eks


Provide the AWS credentials in one of the following ways:

  • Environment Variables. You will need to specify AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, for example by exporting them as shown below.
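A minimal sketch of exporting the credentials in your shell before running tk8 (the values are placeholders for your real keys):

export AWS_ACCESS_KEY_ID=xxx      # placeholder, use your AWS access key ID
export AWS_SECRET_ACCESS_KEY=XXX  # placeholder, use your AWS secret access key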

Create a cluster

Adapt the config.yaml file to specify the cluster details, for example:

eks:
  cluster-name: "kubernauts-eks"        # name of the EKS cluster to create
  aws_region: "us-west-2"               # AWS region to deploy into
  node-instance-type: "m4.large"        # EC2 instance type for the worker nodes
  desired-capacity: 1                   # initial number of worker nodes
  autoscalling-max-size: 2              # maximum size of the worker autoscaling group
  autoscalling-min-size: 1              # minimum size of the worker autoscaling group
  key-file-path: "~/.ssh/id_rsa.pub"    # path to the public SSH key to use

Prerequisites using EKS

  • Git

  • Terraform

  • Ansible

  • kubectl

  • Python

  • pip

  • AWS IAM Authenticator

  • Existing SSH keypair in AWS

  • AWS access and secret keys

  • Exported AWS Credentials

Once done, run:

tk8 cluster install eks

or with Docker:

docker run -v <path-to-the-AWS-SSH-key>:/root/.ssh/ -v "$(pwd)":/tk8 -e AWS_ACCESS_KEY_ID=xxx -e AWS_SECRET_ACCESS_KEY=XXX kubernautslabs/tk8 cluster install eks

After the installation the kubeconfig will be available at: $(pwd)/inventory/yourWorkspaceOrClusterName/provisioner/kubeconfig

Do not delete the inventory directory after the installation, as the cluster state is saved in it.
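To use the new cluster with kubectl, point it at that kubeconfig; a minimal sketch (the workspace name in the path is a placeholder):

export KUBECONFIG=$(pwd)/inventory/yourWorkspaceOrClusterName/provisioner/kubeconfig
kubectl get nodes    # list the worker nodes of the new cluster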

Destroy the provisioned cluster

Make sure you are in the same directory where you executed tk8 cluster install eks, with the inventory directory present. If you used a different workspace name via the --name flag, provide it when destroying as well; see the sketch below.

To delete the provisioned cluster, run:

tk8 cluster destroy eks

or with Docker:

docker run -v <path-to-the-AWS-SSH-key>:/root/.ssh/ -v "$(pwd)":/tk8 -e AWS_ACCESS_KEY_ID=xxx -e AWS_SECRET_ACCESS_KEY=XXX kubernautslabs/tk8 cluster destroy eks
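If the cluster was installed under a custom workspace name, the same name has to be passed on destroy. A minimal sketch, assuming the --name flag is appended after the provisioner name and using the hypothetical workspace my-workspace:

tk8 cluster install eks --name my-workspace    # hypothetical workspace name
tk8 cluster destroy eks --name my-workspace    # must use the same name on destroy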