
AWS EKS Deployment Guide

Infrastructure Setup

Prerequisites

  • AWS CLI v2.0+
  • eksctl v0.138.0+
  • kubectl v1.25+
  • Helm v3.0+
  • AWS IAM Authenticator

Initial Setup

  1. Configure the AWS CLI:

     ```bash
     aws configure
     ```

  2. Create an EKS cluster:

     ```bash
     eksctl create cluster \
       --name prime-edm-cluster \
       --region us-west-2 \
       --version 1.25 \
       --nodegroup-name standard-workers \
       --node-type t3.medium \
       --nodes 3 \
       --nodes-min 1 \
       --nodes-max 5 \
       --managed
     ```
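The same cluster can also be described declaratively in an eksctl config file, which is easier to version-control. A sketch mirroring the flags above (the file name is arbitrary):

```yaml
# cluster.yaml — declarative equivalent of the eksctl command above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: prime-edm-cluster
  region: us-west-2
  version: "1.25"
managedNodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 5
```

Apply it with `eksctl create cluster -f cluster.yaml`.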

Network Configuration

  1. Create a VPC (if not using eksctl):

     ```bash
     aws cloudformation deploy \
       --template-file eks-vpc.yaml \
       --stack-name eks-vpc
     ```

  2. Configure security groups (a `--vpc-id` is required when the cluster is not in the default VPC):

     ```bash
     aws ec2 create-security-group \
       --group-name eks-cluster-sg \
       --description "EKS cluster security group" \
       --vpc-id <VPC_ID>
     ```

Secrets Management

AWS Secrets Manager Setup

  1. Create the necessary IAM role:

     ```bash
     aws iam create-role \
       --role-name eks-secrets-role \
       --assume-role-policy-document file://trust-policy.json
     ```

  2. Attach the required policy:

     ```bash
     aws iam attach-role-policy \
       --role-name eks-secrets-role \
       --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
     ```

  3. Create a secret:

     ```bash
     aws secretsmanager create-secret \
       --name prod/db/credentials \
       --secret-string '{"username":"admin","password":"secret"}'
     ```
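The `trust-policy.json` referenced above is not shown in this guide. A minimal example that lets the cluster's OIDC provider assume the role via IRSA might look like the following, where the account ID and OIDC provider ID are placeholders you must fill in:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}
```

In production you would typically also add a `Condition` block scoping the role to a specific service account.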

External Secrets Configuration

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-backend
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-west-2
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: aws-secret-access
            key: access-key
          secretAccessKeySecretRef:
            name: aws-secret-access
            key: secret-key
```
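A SecretStore alone does not sync anything; an ExternalSecret resource references it and materializes a Kubernetes Secret. A sketch that pulls the `prod/db/credentials` secret created earlier (the target secret name is an assumption):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-backend
    kind: SecretStore
  target:
    name: db-credentials        # Kubernetes Secret to create
  data:
    - secretKey: username
      remoteRef:
        key: prod/db/credentials
        property: username
    - secretKey: password
      remoteRef:
        key: prod/db/credentials
        property: password
```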

Ingress Configuration

ALB Controller Setup

  1. Install the AWS Load Balancer Controller:

     ```bash
     helm repo add eks https://aws.github.io/eks-charts
     helm repo update

     helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
       -n kube-system \
       --set clusterName=prime-edm-cluster
     ```

  2. Configure the Ingress (the deprecated `kubernetes.io/ingress.class` annotation is replaced by `spec.ingressClassName`):

     ```yaml
     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: prime-edm-ingress
       annotations:
         alb.ingress.kubernetes.io/scheme: internet-facing
         alb.ingress.kubernetes.io/target-type: ip
     spec:
       ingressClassName: alb
       rules:
         - host: api.example.com
           http:
             paths:
               - path: /
                 pathType: Prefix
                 backend:
                   service:
                     name: prime-edm-service
                     port:
                       number: 80
     ```
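The Ingress above routes to `prime-edm-service`, which is not defined elsewhere in this guide. A minimal sketch, where the selector label and container port are assumptions about your workload:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prime-edm-service
spec:
  type: ClusterIP              # target-type: ip routes the ALB directly to pod IPs
  selector:
    app: prime-edm             # assumed pod label
  ports:
    - port: 80
      targetPort: 8080         # assumed container port
```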

Production Configuration

High Availability Setup

  1. Configure node groups for HA:

     ```bash
     eksctl create nodegroup \
       --cluster prime-edm-cluster \
       --name prod-nodes \
       --node-type t3.large \
       --nodes 3 \
       --nodes-min 3 \
       --nodes-max 6 \
       --asg-access
     ```

  2. Enable the Cluster Autoscaler:

     ```yaml
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: cluster-autoscaler
       namespace: kube-system
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: cluster-autoscaler
       template:
         metadata:
           labels:
             app: cluster-autoscaler
         spec:
           # assumes a service account bound to IAM permissions for the ASGs (e.g. via IRSA)
           serviceAccountName: cluster-autoscaler
           containers:
             - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.25.0
               name: cluster-autoscaler
               command:
                 - ./cluster-autoscaler
                 - --v=4
                 - --stderrthreshold=info
                 - --cloud-provider=aws
                 - --skip-nodes-with-local-storage=false
                 - --expander=least-waste
                 - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/prime-edm-cluster
     ```
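With the autoscaler free to drain and remove nodes, a PodDisruptionBudget keeps voluntary evictions from taking out all replicas of a workload at once. A sketch for a hypothetical prime-edm deployment (the pod label is an assumption):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: prime-edm-pdb
spec:
  minAvailable: 2              # never evict below two running replicas
  selector:
    matchLabels:
      app: prime-edm           # assumed pod label
```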

Monitoring Setup

  1. Install the CloudWatch agent:

     ```bash
     kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest.yaml
     ```

Backup Configuration

  1. Install Velero (`--secret-file` points at an AWS credentials file for the plugin; without it, pass `--no-secret` and configure credentials another way):

     ```bash
     velero install \
       --provider aws \
       --plugins velero/velero-plugin-for-aws:v1.5.0 \
       --bucket prime-edm-backup \
       --backup-location-config region=us-west-2 \
       --snapshot-location-config region=us-west-2 \
       --secret-file ./credentials-velero
     ```
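Once Velero is installed, a Schedule resource automates recurring backups. A sketch for a nightly backup with 30-day retention (the schedule name and timing are assumptions):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-backup
  namespace: velero
spec:
  schedule: "0 3 * * *"        # daily at 03:00 UTC
  template:
    ttl: 720h                  # keep each backup for 30 days
```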

Cost Optimization

Resource Management

  1. Configure resource requests and limits:

     ```yaml
     resources:
       requests:
         cpu: 250m
         memory: 512Mi
       limits:
         cpu: 500m
         memory: 1Gi
     ```

  2. Use Spot Instances (with `--spot`, the `--instance-types` list supplies the instance pool, so a separate `--node-type` is not needed):

     ```bash
     eksctl create nodegroup \
       --cluster prime-edm-cluster \
       --name spot-nodes \
       --nodes 2 \
       --node-volume-size 20 \
       --node-labels capability=spot \
       --node-zones us-west-2a,us-west-2b \
       --instance-types t3.medium,t3.large \
       --spot
     ```
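Workloads that tolerate interruption can then be steered onto the spot nodes via the `capability=spot` label applied above. A pod-spec fragment:

```yaml
# Pod spec fragment: schedule interruption-tolerant workloads onto the spot node group
nodeSelector:
  capability: spot
```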

Troubleshooting

Common Issues

  1. ALB Controller Issues:

    • Check IAM roles and policies
    • Verify security group configurations
    • Review ALB controller logs
  2. Node Group Issues:

    • Check CloudWatch logs
    • Verify IAM roles
    • Review ASG configurations
  3. Networking Issues:

    • Verify VPC configurations
    • Check security group rules
    • Review route tables

Debug Commands

```bash
# Check ALB controller logs
kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller

# Verify node status
kubectl get nodes -o wide

# Check pod status
kubectl get pods --all-namespaces

# View service endpoints
kubectl get endpoints
```

Released under the MIT License.