# AKS Terraform

## Resources
- https://docs.microsoft.com/en-us/azure/terraform/terraform-create-k8s-cluster-with-tf-and-aks
- https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html
## Prerequisites

### Create Service Principal

The Service Principal setup comes from this guide.
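A minimal sketch of the command, assuming a Service Principal without role assignments; the name is a placeholder:

```bash
# create a Service Principal; note the appId and password in the output,
# we need them later as client_id and client_secret
az ad sp create-for-rbac --name joostvdg-aks-sp --skip-assignment
```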
### Retrieve current Kubernetes Versions
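For example, to list the versions available in a region (westeurope here, matching the location used below):

```bash
az aks get-versions --location westeurope --output table
```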
## Terraform Config

### Create storage account for TF State
```bash
LOCATION=westeurope
RESOURCE_GROUP_NAME=joostvdg-cb-ext-storage
STORAGE_ACCOUNT_NAME=joostvdgcbtfstate
CONTAINER_NAME=tfstate
```
#### List locations
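A quick way to see the valid values for LOCATION:

```bash
az account list-locations --output table
```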
#### Create resource group
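Using the variables set above:

```bash
az group create --name ${RESOURCE_GROUP_NAME} --location ${LOCATION}
```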
#### Create storage account
```bash
az storage account create \
    --name ${STORAGE_ACCOUNT_NAME} \
    --resource-group ${RESOURCE_GROUP_NAME} \
    --location ${LOCATION} \
    --sku Standard_ZRS \
    --kind StorageV2
```
#### Retrieve storage account login

Apparently, no CLI commands available?

Use the Azure Blog on AKS via Terraform for how to do this via the UI.
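That said, the account key should also be retrievable via the CLI; a sketch, assuming the first key is the one you want:

```bash
# grab the first account key and store it for the next steps
STORAGE_ACCOUNT_KEY=$(az storage account keys list \
    --resource-group ${RESOURCE_GROUP_NAME} \
    --account-name ${STORAGE_ACCOUNT_NAME} \
    --query '[0].value' --output tsv)
```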
#### Create TF Storage
```bash
az storage container create -n ${CONTAINER_NAME} \
    --account-name ${STORAGE_ACCOUNT_NAME} \
    --account-key ${STORAGE_ACCOUNT_KEY}
```
### Init Terraform backend
```bash
terraform init -backend-config="storage_account_name=${STORAGE_ACCOUNT_NAME}" \
    -backend-config="container_name=${CONTAINER_NAME}" \
    -backend-config="access_key=${STORAGE_ACCOUNT_KEY}" \
    -backend-config="key=codelab.microsoft.tfstate"
```
### Expose temp variables

These come from the Service Principal we created earlier, where `client_id` is the appId and `client_secret` is the password.
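Terraform picks up variables from the environment via the `TF_VAR_` prefix; the placeholder values come from the Service Principal output:

```bash
export TF_VAR_client_id=<appId>
export TF_VAR_client_secret=<password>
```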
## Rollout
### Set variables
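The variables in variables.tf below all have defaults; a sketch for overriding any of them via the environment, assuming you want values other than mine:

```bash
export TF_VAR_cluster_name=cbcore
export TF_VAR_resource_group_name=joostvdg-cbcore
export TF_VAR_kubernetes_version=1.14.6
```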
### Validate
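Checks that the configuration is syntactically valid before planning:

```bash
terraform validate
```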
### Plan
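Writing the plan to a file (the name `out.plan` is my own choice) so the apply step runs exactly what was reviewed:

```bash
terraform plan -out out.plan
```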
### Apply the plan
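Applying the saved plan from the previous step:

```bash
terraform apply out.plan
```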
### Get kubectl config
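A sketch using the `kube_config` output defined in k8s.tf below; the file path is my own choice:

```bash
# write the raw kubeconfig from the Terraform output and point kubectl at it
echo "$(terraform output kube_config)" > ./azure-kubeconfig
export KUBECONFIG=./azure-kubeconfig
kubectl get nodes
```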
## Enable Preview Features

Currently, using the cluster autoscaler requires enabling a Preview Feature in Azure. The same holds true for enabling multiple node pools, which I think is a best practice for using Kubernetes.
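A sketch of the registration commands; the feature names (VMSSPreview and MultiAgentpoolPreview) are what I believe they were at the time of writing, so verify them first:

```bash
# register the preview features on your subscription
az feature register --namespace Microsoft.ContainerService --name VMSSPreview
az feature register --namespace Microsoft.ContainerService --name MultiAgentpoolPreview
# re-register the provider so the features take effect
az provider register --namespace Microsoft.ContainerService
```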
## Terraform Code

!!! important
    When using Terraform for AKS and you want to use Multiple Node Pools and/or the Cluster Autoscaler, you need at least version 1.32.0 of the `azurerm` provider.
### main.tf
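A minimal sketch of what main.tf needs, based on the provider version note above and the backend settings passed to `terraform init`:

```hcl
provider "azurerm" {
  # minimum version for multiple node pools / cluster autoscaler, see note above
  version = ">= 1.32.0"
}

terraform {
  # settings are supplied via -backend-config at init time
  backend "azurerm" {}
}
```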
### k8s.tf
resource "azurerm_kubernetes_cluster" "k8s" {
name = "acctestaks1"
location = "${azurerm_resource_group.k8s.location}"
resource_group_name = "${azurerm_resource_group.k8s.name}"
dns_prefix = "jvdg"
kubernetes_version = "${var.kubernetes_version}"
agent_pool_profile {
name = "default"
vm_size = "Standard_D2s_v3"
os_type = "Linux"
os_disk_size_gb = 30
enable_auto_scaling = true
count = 2
min_count = 2
max_count = 3
type = "VirtualMachineScaleSets"
node_taints = ["mytaint=true:NoSchedule"]
}
agent_pool_profile {
name = "pool1"
vm_size = "Standard_D2s_v3"
os_type = "Linux"
os_disk_size_gb = 30
enable_auto_scaling = true
min_count = 1
max_count = 3
type = "VirtualMachineScaleSets"
}
agent_pool_profile {
name = "pool2"
vm_size = "Standard_D4s_v3"
os_type = "Linux"
os_disk_size_gb = 30
enable_auto_scaling = true
min_count = 1
max_count = 3
type = "VirtualMachineScaleSets"
}
role_based_access_control {
enabled = true
}
service_principal {
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
}
tags = {
Environment = "Development"
CreatedBy = "Joostvdg"
}
}
output "client_certificate" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate}"
}
output "kube_config" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config_raw}"
}
### variables.tf
variable "client_id" {}
variable "client_secret" {}
variable "kubernetes_version" {
default = "1.14.6"
}
variable "agent_count" {
default = 3
}
variable "ssh_public_key" {
default = "~/.ssh/id_rsa.pub"
}
variable "dns_prefix" {
default = "jvdg"
}
variable cluster_name {
default = "cbcore"
}
variable resource_group_name {
default = "joostvdg-cbcore"
}
variable container_registry_name {
default = "joostvdgacr"
}
variable location {
default = "westeurope"
}
### acr.tf
resource "azurerm_resource_group" "ecr" {
name = "${var.resource_group_name}-acr"
location = "${var.location}"
}
resource "azurerm_container_registry" "acr" {
name = "${var.container_registry_name}"
resource_group_name = "${azurerm_resource_group.ecr.name}"
location = "${azurerm_resource_group.k8s.location}"
sku = "Premium"
admin_enabled = false
}
### resource-group.tf
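A minimal sketch of resource-group.tf, inferred from the `azurerm_resource_group.k8s` references in k8s.tf and the variable defaults above:

```hcl
# the resource group the AKS cluster in k8s.tf is placed in
resource "azurerm_resource_group" "k8s" {
  name     = "${var.resource_group_name}"
  location = "${var.location}"
}
```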