Deploying Kubernetes on Azure with CoreOS and Weave Networking
This guide walks through deploying a Kubernetes cluster on Microsoft Azure using CoreOS as the operating system and Weave for container networking. Weave provides secure, simple, and transparent network connectivity between pods across nodes. The setup described here is intended to serve as a base for a production-ready deployment with minimal modifications.
Prerequisites
- An active Azure account
- Node.js installed locally (required for deployment scripts)
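Before proceeding, it can help to confirm the local tooling is in place; a minimal check from any Unix-like shell (any reasonably recent Node.js and npm release should do):
node --version
npm --version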
Initial Setup
Begin by cloning the official Kubernetes repository and navigating to the Azure/CoreOS deployment directory:
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
Install the necessary Node.js dependencies:
npm install
Authenticate with Azure using the provided helper script:
./azure-login.js -u <your_azure_username>
Launch the cluster creation script:
./create-kubernetes-cluster.js
This provisions a cluster consisting of:
- Three dedicated etcd nodes (etcd-00, etcd-01, etcd-02) in a ring topology
- One Kubernetes master node (kube-00)
- Two worker nodes (kube-01, kube-02)
All VMs are initially configured as single-core instances to remain within Azure’s free tier limits. Upon completion, the script outputs SSH configuration details and a deployment state file, for example:
azure_wrapper/info: Saved SSH config: `ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00`
azure_wrapper/info: Hosts: [ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: State saved to: ./output/kube_1c1496016083b4_deployment.yml
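The generated SSH configuration covers every host listed above, so the same -F flag should also work for reaching the etcd and worker nodes directly, for example:
ssh -F ./output/kube_1c1496016083b4_ssh_conf etcd-00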
Connect to the master node:
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
Verify that worker nodes are registered and ready:
core@kube-00 ~ $ kubectl get nodes
NAME STATUS LABELS
kube-01 Ready kubernetes.io/hostname=kube-01
kube-02 Ready kubernetes.io/hostname=kube-02
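Since pod networking is provided by Weave rather than a cloud networking plugin, a quick sanity check on a worker node can confirm the Weave components came up. The exact container and unit names depend on the cloud-config used by the provisioning scripts, so treat the commands below as a sketch rather than the guide's canonical procedure:
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-01
core@kube-01 ~ $ docker ps | grep -i weave
core@kube-01 ~ $ systemctl list-units | grep -i weave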
Deploying a Sample Application
Deploy the Kubernetes Guestbook example:
kubectl create -f ~/guestbook-example
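The manifests this command applies are staged in the core user's home directory on the master; listing the directory is a simple way to see which replication controllers and services the example defines:
core@kube-00 ~ $ ls ~/guestbook-example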
Monitor pod status until all transition from Pending to Running:
kubectl get pods --watch
Once complete, you should see six running pods: three frontend, one Redis master, and two Redis slaves.
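The example also creates three services (frontend, redis-master, and redis-slave), which can be confirmed with the standard listing command:
core@kube-00 ~ $ kubectl get services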
Scaling the Cluster
To simulate a production workload, scale the cluster by adding larger VMs. In a new terminal, set the desired VM size:
export AZ_VM_SIZE=Large
Run the scaling script using the latest deployment state file:
./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
This adds two new worker nodes (kube-03, kube-04). Confirm they appear in the node list:
core@kube-00 ~ $ kubectl get nodes
Scale the application to utilize the additional capacity:
kubectl scale --replicas=4 rc redis-slave
kubectl scale --replicas=4 rc frontend
Verify the updated replica counts:
core@kube-00 ~ $ kubectl get rc
Check that the new frontend pods are scheduled across all four worker nodes:
kubectl get pods -l name=frontend
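To see which node each frontend pod landed on, kubectl can report the scheduling target. On kubectl releases that support it, the wide output format adds a NODE column; describing an individual pod works on any version (the pod name below is a placeholder):
core@kube-00 ~ $ kubectl get pods -l name=frontend -o wide
core@kube-00 ~ $ kubectl describe pod <frontend-pod-name> | grep -i node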
Exposing the Application Publicly
Since native Azure LoadBalancer integration wasn’t available in Kubernetes 1.0, use the provided script to map the Guestbook service port to the master node’s public IP:
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
The script creates a TCP endpoint mapping port 80 to the internal service port (e.g., 31605). The output includes the public VIP:
Virtual IP Address : 137.117.156.164
Access the application at http://137.117.156.164/.
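A quick request from any machine with internet access verifies the endpoint is serving; substitute the VIP reported by the script for the example address below:
curl -I http://137.117.156.164/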
Cleanup
To avoid ongoing Azure charges, destroy the entire cluster using the most recent deployment file:
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
Note: Always use the latest _deployment.yml file generated after any scaling operation.
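As an optional double-check that nothing billable is left behind, the classic Azure cross-platform CLI (the Node.js-based azure tool, not the newer az CLI) can list any remaining VMs; this assumes that CLI is installed and authenticated:
azure vm list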