OpenTofu state
azimuth-ops uses OpenTofu to manage the K3S node in both the single node and high-availability deployment methods.
Info
OpenTofu is an open-source fork of Terraform. In most cases, variable names used in Azimuth currently still use the terraform_ prefix.
In order to keep track of the resources that it has created, and how they map to the resources in the OpenTofu configuration generated by azimuth-ops, OpenTofu must store its state somewhere. The location of the state is determined by the backend configuration.
Each environment in an azimuth-config repository has a corresponding OpenTofu state, and these states are independent of each other.
Local state
By default, azimuth-ops uses the local backend, which stores the OpenTofu state as a file in the .work directory on the local disk. This requires no explicit configuration, but comes with the usual caveats about keeping important state on your local machine.
Not suitable for production
Local state is sufficient for a demonstration or evaluation, but for a shared or production deployment it is recommended to use remote state.
Remote state
OpenTofu supports a number of remote backends that can be used to persist state independently of where a deployment is run. This allows deployments to be made from anywhere that can access the state without corrupting or conflicting with any existing resources from previous deployments.
Tip
If you want to use the same remote backend configuration for multiple environments, consider using a site mixin environment to avoid specifying the configuration multiple times.
Warning
In order to avoid multiple writers when using remote state, it is recommended to use a backend that supports state locking.
Secret configuration
Some of the configuration variables for remote backends, e.g. passwords and keys, should be kept secret. If you want to keep such variables in Git - which is recommended where possible - then they must be encrypted.
GitLab
Tip
This is the recommended option if you are using GitLab for your config repository.
If you are using GitLab to host your configuration repository, either on gitlab.com or self-hosted, you can use GitLab-managed Terraform state to store the states for your environments.
GitLab provides an HTTP backend that can be configured as follows:
terraform_backend_type: http
# The API base URL for the target GitLab project
# For a self-hosted GitLab instance, replace gitlab.com with your domain
gitlab_project_url: "https://gitlab.com/api/v4/projects/<project id>"
# The state endpoint for the environment
#
# Using the azimuth_environment variable as the state name means that each
# concrete environment gets a separate managed Terraform state even if this
# configuration is in a shared mixin environment
terraform_http_address: "{{ gitlab_project_url }}/terraform/state/{{ azimuth_environment }}"
# The state-locking and unlocking endpoints for the environment
terraform_http_lock_address: "{{ terraform_http_address }}/lock"
terraform_http_lock_method: POST
terraform_http_unlock_address: "{{ terraform_http_lock_address }}"
terraform_http_unlock_method: DELETE
terraform_backend_config:
  address: "{{ terraform_http_address }}"
  lock_address: "{{ terraform_http_lock_address }}"
  lock_method: "{{ terraform_http_lock_method }}"
  unlock_address: "{{ terraform_http_unlock_address }}"
  unlock_method: "{{ terraform_http_unlock_method }}"
The username and password (or token) used to authenticate with GitLab when managing the states are set using the TF_HTTP_USERNAME and TF_HTTP_PASSWORD environment variables respectively.
If you are using GitLab CI/CD to automate deployments, then the pipeline will be issued with a suitable token. The sample configuration includes configuration to populate these variables using this token.
If you are not using automation but your GitLab installation has project access tokens available, you can configure a project access token and store it (encrypted!) in the env.secret file, referencing the bot username:
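A minimal sketch of what such an env.secret file might contain, assuming a hypothetical bot username and token value (substitute the bot user created for your project access token and the token itself):

```shell
# env.secret
# Username of the bot user associated with the project access token
export TF_HTTP_USERNAME="project_12345_bot"
# The project access token itself
export TF_HTTP_PASSWORD="<project access token>"
```

Because this file contains a credential, it must be encrypted before being committed to the repository.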
If you need to access an environment deployed using automation, or you do not have project access tokens available, then you can use a personal access token, which at least avoids using your password.
Never commit personal access tokens
You should never commit a personal access token to the configuration repository, even encrypted, because it is not possible to set a project scope.
If using a personal access token, you should export the relevant variables before activating an environment:
export TF_HTTP_USERNAME="<username>"
export TF_HTTP_PASSWORD="<token>"
source ./bin/activate my-site
S3
OpenTofu also has an S3 backend that is able to store state in any S3-compatible object store, such as Amazon S3 or Ceph Object Gateway.
Depending on the provider of your object store, the specific configuration options in the following section may differ. The example configuration shown here is for a Ceph object store as Ceph is often used together with OpenStack.
See the OpenTofu docs for the S3 backend for all the available options.
terraform_backend_type: s3
# The endpoint of the object store
terraform_s3_endpoint: object.example.com
# The region to use
# Ceph does not normally use the region, but OpenTofu requires it
terraform_s3_region: not-used-but-required
terraform_s3_skip_region_validation: "true"
# The bucket to put OpenTofu states in
# NOTE: This bucket must already exist - it will not be created by OpenTofu
terraform_s3_bucket: azimuth-opentofu-states
# The key to use for the state for the environment
#
# Using the azimuth_environment variable in the key means that the state
# for each concrete environment is stored in a separate key, even if this
# configuration is in a shared mixin environment
terraform_s3_key: "{{ azimuth_environment }}.tfstate"
# The STS API doesn't exist for Ceph
terraform_s3_skip_credentials_validation: "true"
# Tell OpenTofu to use path-style URLs, e.g. <host>/<bucket>, instead of
# subdomain-style URLs, e.g. <bucket>.<host>
terraform_s3_force_path_style: "true"
terraform_backend_config:
  endpoint: "{{ terraform_s3_endpoint }}"
  region: "{{ terraform_s3_region }}"
  bucket: "{{ terraform_s3_bucket }}"
  key: "{{ terraform_s3_key }}"
  skip_credentials_validation: "{{ terraform_s3_skip_credentials_validation }}"
  force_path_style: "{{ terraform_s3_force_path_style }}"
  skip_region_validation: "{{ terraform_s3_skip_region_validation }}"
The S3 credentials (access key ID and secret key) can be specified either as environment variables or as Ansible variables.
To use environment variables, just place the credentials in env.secret:
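A sketch of the env.secret contents, using the standard AWS credential environment variables that the S3 backend reads (substitute your own credentials):

```shell
# env.secret
# Access key ID and secret key for the S3-compatible object store
export AWS_ACCESS_KEY_ID="<access key id>"
export AWS_SECRET_ACCESS_KEY="<secret key>"
```

As with any credentials kept in the repository, this file must be encrypted.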
To use Ansible variables, add them to secrets.yml and reference them in the terraform_backend_config variable:
terraform_s3_access_key: "<access key id>"
terraform_s3_secret_key: "<secret key>"
terraform_backend_config:
  # ... other options ...
  access_key: "{{ terraform_s3_access_key }}"
  secret_key: "{{ terraform_s3_secret_key }}"
Created: April 9, 2024