Deploying on OpenStack

Symplegma supports OpenStack as a cloud provider.

Architecture

For now, each cluster should have its own OpenStack project. The Terraform modules used come from the Kubespray project and are bundled inside the symplegma module. This is subject to change in the future, and PRs are welcome to extend the possibilities and split the modules.

Requirements

Git clone Symplegma main repository:

git clone https://github.com/clusterfrak-dynamics/symplegma.git

Fetch the roles with ansible-galaxy:

ansible-galaxy install -r requirements.yml
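
Putting the two steps together (this assumes requirements.yml sits at the root of the cloned repository, so the install is run from inside it):

git clone https://github.com/clusterfrak-dynamics/symplegma.git
cd symplegma
ansible-galaxy install -r requirements.yml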

Terraform and Terragrunt

Terragrunt is used to enable multiple clusters and environments.
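
Each cluster lives in its own inventory directory with its own Terragrunt variables, so several clusters or environments can coexist side by side (cluster names below are illustrative):

inventory/openstack/
├── staging
│   └── tf_module_symplegma
│       └── terraform.tfvars
└── production
    └── tf_module_symplegma
        └── terraform.tfvars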

Terragrunt modules

Symplegma is packaged as a Terragrunt module, available in the repository under contrib/openstack/terraform/modules/symplegma.

Terragrunt variables

Cluster specific variables:

terragrunt = {
  include {
    path = "${find_in_parent_folders()}"
  }

  terraform {
    source = "github.com/clusterfrak-dynamics/symplegma.git//contrib/openstack/terraform/modules/symplegma"
  }
}

//
// [provider]
//

//
// [kubernetes]
//
cluster_name = "symplegma"

network_name    = "internal_network"
subnet_cidr     = "10.0.0.0/24"
dns_nameservers = []
use_neutron     = "1"

number_of_k8s_masters         = "3"
number_of_k8s_masters_no_etcd = "0"
number_of_k8s_nodes           = "0"
floatingip_pool               = "ext-net"
number_of_bastions            = "0"
external_net                  = "ext-net-uuid"
router_id                     = ""

az_list                                      = ["nova"]
number_of_etcd                               = "0"
number_of_k8s_masters_no_floating_ip         = "0"
number_of_k8s_masters_no_floating_ip_no_etcd = "0"
number_of_k8s_nodes_no_floating_ip           = "0"
public_key_path                              = "~/.ssh/id_rsa.pub"
image                                        = "CoreOS 1068.9.0"
ssh_user                                     = "core"
flavor_k8s_master                            = "128829e3-117d-49da-ae58-981bb2c04b0e"
flavor_k8s_node                              = "128829e3-117d-49da-ae58-981bb2c04b0e"
flavor_etcd                                  = "128829e3-117d-49da-ae58-981bb2c04b0e"
flavor_bastion                               = "128829e3-117d-49da-ae58-981bb2c04b0e"
k8s_master_fips                              = []
k8s_node_fips                                = []
bastion_fips                                 = []
bastion_allowed_remote_ips                   = ["0.0.0.0/0"]
supplementary_master_groups                  = ""
supplementary_node_groups                    = ""
worker_allowed_ports                         = [
  {
    "protocol" = "tcp"
    "port_range_min" = 30000
    "port_range_max" = 32767
    "remote_ip_prefix" = "0.0.0.0/0"
  }
]
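
To find suitable values for image, flavor_*, and external_net, you can query your cloud with the OpenStack CLI (assuming it is installed and your OpenStack credentials are configured):

openstack image list
openstack flavor list
openstack network list --external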

Creating the infrastructure

To initialize a new OpenStack cluster, simply run ./scripts/init-openstack.sh $CLUSTER_NAME.
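
For example, to bootstrap a cluster named sample (the name is illustrative; keep it consistent with the cluster_name Terraform variable described below):

export CLUSTER_NAME=sample
./scripts/init-openstack.sh "$CLUSTER_NAME"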

It will generate inventory/openstack/$CLUSTER_NAME with the following directory structure:

sample
├── openstack.py -> ../../../contrib/openstack/inventory/openstack.py
├── extra_vars.yml
├── group_vars
│   └── all
│       └── all.yml
├── host_vars
├── symplegma-ansible.sh -> ../../../contrib/openstack/scripts/symplegma-ansible.sh
└── tf_module_symplegma
    └── terraform.tfvars

Customizing the infrastructure

Terraform variable files come with sensible defaults.

If you wish to change the remote state configuration, you can edit $CLUSTER_NAME/terraform.tfvars.

If you wish to customize the infrastructure, you can edit $CLUSTER_NAME/tf_module_symplegma/terraform.tfvars.

One of the most important variables is cluster_name, which allows you to use the OpenStack dynamic inventory with multiple clusters. We recommend keeping this variable consistent throughout your files and equal to the $CLUSTER_NAME defined earlier.

There is also a set of sensible default tags, such as Environment, that you can customize, and you can also add your own.

To avoid bloating the configuration files with unnecessary hard-coded values, Terraform provider credentials are derived from your OpenStack SDK configuration (clouds.yaml). Make sure you are using the correct OpenStack credentials by setting the OS_CLOUD environment variable.
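
For example, if your clouds.yaml defines a cloud entry named mycloud (the name is illustrative):

# Select the OpenStack credentials Terraform will use
export OS_CLOUD=mycloud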

Initializing the infrastructure

Once everything is configured to your needs, just run:

terragrunt apply-all --terragrunt-source-update
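
For example, assuming you run Terragrunt from the cluster inventory directory generated earlier:

cd inventory/openstack/${CLUSTER_NAME}
terragrunt apply-all --terragrunt-source-update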

A couple of minutes later you should see your instances spawning in your Horizon dashboard.

Deploying Kubernetes with symplegma playbooks

OpenStack Dynamic inventory

The OpenStack dynamic inventory allows you to target a specific set of instances depending on the $CLUSTER_NAME you set earlier. You can configure its behavior by setting the following environment variable:

  • export SYMPLEGMA_CLUSTER=$CLUSTER_NAME : Target only instances belonging to this cluster.

To test the behavior of the dynamic inventory just run:

./inventory/openstack/${CLUSTER_NAME}/openstack.py --list

It should only return a specific subset of your instances.
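
A minimal check, combining the export with the inventory listing:

export SYMPLEGMA_CLUSTER=${CLUSTER_NAME}
./inventory/openstack/${CLUSTER_NAME}/openstack.py --list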

Info

This variable can be exported automatically when using the deployment script, but it can still be set manually for testing / manual deployment purposes.

Customizing Kubernetes deployment

In the cluster folder, it is possible to edit Ansible variables:

  • group_vars/all/all.yml: contains default Ansible variables.
---
bootstrap_python: false
# Install a portable Python distribution on OSes that do not provide Python
# (e.g. CoreOS/Flatcar):
# bootstrap_python: true
# ansible_python_interpreter: /opt/bin/python

ansible_ssh_user: ubuntu

ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
# To use a bastion host between the nodes and Ansible, use:
# ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p -q ubuntu@{{ ansible_ssh_bastion_host }}"'
# ansible_ssh_bastion_host: __BASTION_IP__

kubeadm_version: v1.24.1
kubernetes_version: v1.24.1
# If deploying HA clusters, specify the load balancer IP or domain name and port
# in front of the control plane nodes:
# kubernetes_api_server_address: __LB_HOSTNAME__
# kubernetes_api_server_port: __LB_LISTENER_PORT__

bin_dir: /usr/local/bin
# Change the default path for custom binaries. On OSes with an immutable file
# system (e.g. CoreOS/Flatcar), use a writable path:
# bin_dir: /opt/bin

# Customize API server
kubeadm_api_server_extra_args: {}
kubeadm_api_server_extra_volumes: {}

# Customize controller manager
# eg. to publish prometheus metrics on "0.0.0.0":
# kubeadm_controller_manager_extra_args: |
#   address: 0.0.0.0
kubeadm_controller_manager_extra_args: {}
kubeadm_controller_manager_extra_volumes: {}

# Customize scheduler
# eg. to publish prometheus metrics on "0.0.0.0":
# kubeadm_scheduler_extra_args: |
#   address: 0.0.0.0
kubeadm_scheduler_extra_volumes: {}
kubeadm_scheduler_extra_args: {}

# Customize Kubelet
# `kubeadm_kubelet_extra_args` is to be used as a last resort,
# `kubeadm_kubelet_component_config` configures the kubelet with the native kubeadm API,
# please see
# https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for
# more information
kubeadm_kubelet_component_config: {}
kubeadm_kubelet_extra_args: {}


# Customize Kube Proxy configuration using native Kubeadm API
# eg. to publish prometheus metrics on "0.0.0.0":
# kubeadm_kube_proxy_component_config: |
#   metricsBindAddress: 0.0.0.0
kubeadm_kube_proxy_component_config: {}

# Additional subject alternative names for the API server
# e.g. to add additional domains:
# kubeadm_api_server_cert_extra_sans: |
#   - mydomain.example.com
kubeadm_api_server_cert_extra_sans: {}

kubeadm_cluster_name: symplegma

# Do not label nor taint control plane nodes (skips the kubeadm mark-control-plane phase)
# kubeadm_mark_control_plane: false

# Enable systemd cgroup for Kubelet and container runtime
# DO NOT CHANGE this on an existing cluster: Changing the cgroup driver of a
# Node that has joined a cluster is strongly discouraged. If the kubelet
# has created Pods using the semantics of one cgroup driver, changing the
# container runtime to another cgroup driver can cause errors when trying to
# re-create the Pod sandbox for such existing Pods. Restarting the kubelet may
# not solve such errors. Default is to use cgroupfs.
# systemd_cgroup: true

container_runtime: containerd

Info

ansible_ssh_bastion_host, kubernetes_api_server_address and kubernetes_api_server_port can be automatically populated when using the deployment script, but they can still be set manually for testing / manual deployment purposes.

  • extra_vars.yml: contains OpenStack cloud-provider-specific variables that you can override.
---
kubeadm_api_server_extra_args: |
  cloud-provider: "openstack"

kubeadm_controller_manager_extra_args: |-
  cloud-provider: "openstack"
  configure-cloud-routes: "false"

kubeadm_scheduler_extra_args: {}
kubeadm_api_server_extra_volumes: {}
kubeadm_controller_manager_extra_volumes: {}
kubeadm_scheduler_extra_volumes: {}
kubeadm_kubelet_extra_args: |
  cloud-provider: "openstack"

calico_mtu: 1430
calico_ipv4pool_ipip: "Always"
calico_felix_ipip: "true"

Info

If you need to override control plane or kubelet specific parameters, do it in extra_vars.yml, as it overrides all other variables previously defined, as per the Ansible variable precedence documentation.

Running the playbooks with deployment script

A simple (really, it cannot be simpler) deployment script can call Ansible and compute the necessary Terraform outputs for you:

#! /bin/sh

INVENTORY_DIR=$(dirname "${0}")

# Derive the cluster name from the Terragrunt outputs so the OpenStack dynamic
# inventory only targets this cluster's instances.
export SYMPLEGMA_CLUSTER="$( cd "${INVENTORY_DIR}" && terragrunt output-all cluster_name 2>/dev/null )"

# Run the symplegma-init.yml playbook with the cluster's extra_vars, resolving the
# bastion floating IP from the Terragrunt outputs; extra arguments are forwarded
# to ansible-playbook.
ansible-playbook -i "${INVENTORY_DIR}"/openstack.py symplegma-init.yml -b -v \
  -e @"${INVENTORY_DIR}"/extra_vars.yml \
  -e ansible_ssh_bastion_host="$( cd "${INVENTORY_DIR}" && terragrunt output-all bastion_fips 2>/dev/null )" \
  "$@"

From the root of the repository just run your cluster deployment script:

./inventory/openstack/${CLUSTER_NAME}/symplegma-ansible.sh
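
Since the script forwards any extra arguments to ansible-playbook, you can append additional flags, for example to perform a dry run:

./inventory/openstack/${CLUSTER_NAME}/symplegma-ansible.sh --check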

Testing cluster access

When the deployment is over, admin.conf should be exported to kubeconfig/$CLUSTER_NAME/admin.conf. You should be able to call the Kubernetes API with kubectl:

export KUBECONFIG=$(pwd)/kubeconfig/${CLUSTER_NAME}/admin.conf

kubectl get nodes

NAME                                       STATUS   ROLES    AGE     VERSION
ip-10-0-1-140.eu-west-1.compute.internal   Ready    <none>   2d22h   v1.13.0
ip-10-0-1-22.eu-west-1.compute.internal    Ready    master   2d22h   v1.13.0
ip-10-0-2-47.eu-west-1.compute.internal    Ready    <none>   2d22h   v1.13.0
ip-10-0-2-8.eu-west-1.compute.internal     Ready    master   2d22h   v1.13.0
ip-10-0-3-123.eu-west-1.compute.internal   Ready    master   2d22h   v1.13.0
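
You can also check that the control plane components and CNI pods are healthy (output will vary per cluster):

kubectl get pods -n kube-system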
