Ansible AWX on Minikube
What is Ansible AWX
Ansible AWX is the upstream project for Ansible Tower. It adds a platform on top of Ansible: a web interface and a REST API. With AWX you can create users and grant them access based on their account or on the groups they are part of.
Next to fine-grained access control, AWX is a central place for all Ansible logging. You can find the output of every run of every playbook.
The RESTful API is where AWX really shines. Everything that is possible in the web interface can be triggered by a call to the API.
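As a rough illustration (the host, credentials and job template id below are placeholders for your own setup), listing job templates and launching one comes down to two HTTP calls:
# List all job templates (replace host and credentials with your own)
curl -s -u admin:password http://awx.example.com/api/v2/job_templates/
# Launch the job template with id 5 (placeholder id)
curl -s -u admin:password -X POST http://awx.example.com/api/v2/job_templates/5/launch/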
Installation methods
The AWX team provides a few installation options:
- Local Docker/Docker-compose
- Kubernetes
- Red Hat OpenShift
These are the only supported ways to run AWX.
Installation using Minikube
Minikube
Overview and installation
For those unfamiliar with Minikube: it is a tool to set up a local Kubernetes cluster with one node. It is ideal for developing applications or doing some quick tests or installations. The Minikube tool is just a binary that needs to be available in your $PATH
to be able to use it.
Releases for various OSes can be found here.
To be able to connect to your local Kubernetes cluster, we need to install kubectl
, a CLI tool to talk to the cluster.
Installation instructions for various OSes can be found here.
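A quick sanity check that both tools are found in your $PATH:
minikube version
kubectl version --client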
Configuration
The AWX installation guide states the required resources:
- Memory: 6GB
- CPU: 3
Minikube comes with only 1GB of memory and 1 core when using the standard configuration. This is easily changed to fit our needs:
minikube config set cpus 4
minikube config set memory 8192
These commands persistently set the memory and CPU count to the provided values. You can also start the Minikube VM with the --cpus
and --memory
options to apply these values for a single start only.
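For example, a one-off start with the same resources could look like this:
minikube start --cpus 4 --memory 8192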
Minikube uses Docker by default to run containers. This is usually fine, but as an experiment, we're going to use the cri-o container runtime to set up AWX.
Starting Minikube
Minikube can be started with the following command:
minikube start \
--network-plugin=cni \
--extra-config=kubelet.container-runtime=remote \
--extra-config=kubelet.container-runtime-endpoint=/var/run/crio/crio.sock \
--extra-config=kubelet.image-service-endpoint=/var/run/crio/crio.sock \
--bootstrapper=kubeadm
Verification and disabling Docker
When it's done, kubectl
is configured to contact your local Kubernetes cluster. Verify everything is set up as expected:
kubectl get nodes
It should show something similar to this:
NAME STATUS ROLES AGE VERSION
minikube Ready master 3h v1.10.0
The master node isn't the only thing that should be running; various add-on pods are started as well.
Don't continue until
kubectl get pods --all-namespaces
shows all pods running:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-minikube 1/1 Running 0 58s
kube-system kube-addon-manager-minikube 1/1 Running 0 1m
kube-system kube-apiserver-minikube 1/1 Running 0 1m
kube-system kube-controller-manager-minikube 1/1 Running 0 1m
kube-system kube-dns-86f4d74b45-kmtxw 3/3 Running 0 1m
kube-system kube-proxy-4x7qb 1/1 Running 0 1m
kube-system kube-scheduler-minikube 1/1 Running 0 1m
kube-system kubernetes-dashboard-5498ccf677-lbpbc 1/1 Running 0 1m
kube-system storage-provisioner 1/1 Running 0 1m
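Instead of re-running the command by hand, you can let kubectl keep watching until everything settles (Ctrl+C to stop):
kubectl get pods --all-namespaces -w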
The Minikube binary provides us with a way to easily access the VM over ssh
:
minikube ssh
When we use systemctl status docker
we can see that it is still running, even though we set cri-o
as our runtime. We can just stop Docker with sudo systemctl stop docker && sudo systemctl disable docker
.
Issuing systemctl status crio
should show that cri-o
is currently running.
When I tried to add a registry to cri-o
with the Minikube option --insecure-registry=docker.io
, the registry was not added, so let's add it manually.
In the Minikube VM:
sudo sed -i '/^registries/a "docker.io",' /etc/crio/crio.conf
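Before restarting, you can verify that the line was actually added:
sudo grep -A 2 '^registries' /etc/crio/crio.conf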
After adding the docker.io
registry, we have to restart cri-o
.
sudo systemctl restart crio
AWX
We now have a working Kubernetes cluster on which we will deploy AWX.
Getting the components
The AWX project provides us with Ansible playbooks for the different installation methods.
git clone https://github.com/ansible/awx.git
The provided roles use helm to provision a postgresql
instance. Helm is a package manager for Kubernetes. With templates, you can deploy and update services in an automated way. Helm uses tiller
as its engine on your Kubernetes cluster, so it has to be running inside your cluster. Once you have Helm installed, the command to start tiller
is simple:
helm init
Before continuing, please verify that tiller
is running.
kubectl get pods --namespace kube-system | grep tiller
Which shows:
tiller-deploy-7cc7bfb9f6-6h44v 1/1 Running 0 8m
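The helm client itself can also confirm it reaches tiller; with Helm 2, this prints both the client and the server (tiller) version:
helm version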
Running the playbook
The installer
directory contains playbooks for the different installation types. It's the inventory
file that determines what roles will be used. So let's take a look at the inventory file.
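The commands that follow assume you work from inside that directory (with the repository cloned into ./awx as above):
cd awx/installer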
There are only a few things to change:
- Set kubernetes_context to minikube
- Uncomment kubernetes_namespace=awx
- Uncomment use_docker_compose=false
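After those edits, the relevant lines in the inventory file should look roughly like this (only the variables mentioned above change; the rest of the file stays as shipped):
kubernetes_context=minikube
kubernetes_namespace=awx
use_docker_compose=false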
Now we're ready to run the playbook:
ansible-playbook -i inventory install.yml
Two failed tasks during the first run are normal, as these tasks check for the existence of the awx
namespace and an existing PostgreSQL service.
Accessing the AWX web interface
After a while (downloading the images takes some time), you should have a working AWX instance.
Verify all needed pods are running.
kubectl get pods --namespace awx
It should show:
NAME READY STATUS RESTARTS AGE
awx-67449cc45c-kgpj6 4/4 Running 0 3m
awx-postgresql-575987c895-tknr2 1/1 Running 0 7m
With Minikube, you can access your service like this:
minikube service awx-web-svc --url --namespace awx
This prints the URL where you can access the web interface in your browser.
Log in with user admin
and password password
!