In modern application deployment, ensuring smooth and efficient updates is paramount. Downtime during updates can lead to lost revenue, customer dissatisfaction, and reputational damage. For organizations running Kubernetes on AWS, combining Argo Rollouts with the AWS Load Balancer Controller offers a powerful solution for performing AWS server update rollouts with minimal to zero downtime.
This article delves into how to utilize the AWS Load Balancer Controller with Argo Rollouts to manage traffic during application updates. We will explore the concepts, configurations, and best practices to implement robust and reliable rollout strategies for your Kubernetes services running on AWS.
Understanding the Foundation: AWS Load Balancer Controller and Argo Rollouts
Before diving into the specifics, let’s establish a clear understanding of the key components involved:
- AWS Load Balancer Controller (formerly the ALB Ingress Controller): This controller is a crucial component in Kubernetes environments on AWS. It automatically provisions and manages AWS Application Load Balancers (ALBs) to route external traffic to your Kubernetes services. By configuring Ingress objects, you can define rules that dictate how traffic is directed to your applications running within the cluster.
- Argo Rollouts: Argo Rollouts is a Kubernetes controller that provides advanced deployment strategies beyond the standard Kubernetes rollout. It specializes in progressive delivery techniques like canary deployments and blue/green deployments, enabling safer and more controlled application updates.
By integrating these two powerful tools, you can orchestrate sophisticated AWS server update rollouts that leverage the advanced traffic management capabilities of AWS ALB and the progressive delivery strategies of Argo Rollouts.
How AWS ALB Controller and Argo Rollouts Work Together for Rollouts
The synergy between the AWS ALB Controller and Argo Rollouts for AWS server update rollouts is achieved through the ALB’s ability to perform weighted target group routing. Here’s a breakdown of the process:
1. Ingress Configuration: You define an Ingress resource in Kubernetes, managed by the AWS Load Balancer Controller. This Ingress specifies rules for routing traffic based on paths, hostnames, and other criteria. Importantly, it utilizes annotations provided by Argo Rollouts to control traffic splitting between different Kubernetes services.
2. Target Groups and Actions: AWS ALBs operate using Listeners, Rules, and Actions. Listeners define how client traffic enters, while Rules dictate actions to be taken based on request attributes. Actions include forwarding traffic to Target Groups, which in this context represent Kubernetes services. The ALB can forward traffic to multiple Target Groups with weights, enabling traffic splitting.
3. Argo Rollouts’ Role in Traffic Shaping: During an AWS server update rollout, Argo Rollouts dynamically adjusts the traffic distribution between the old (stable) and new (canary) versions of your application. It achieves this by modifying a specific annotation on the Ingress object: `alb.ingress.kubernetes.io/actions.<service-name>`.
4. Automatic Annotation Injection: Argo Rollouts automatically injects and updates this actions annotation. It contains a JSON payload that instructs the AWS Load Balancer Controller to split traffic between the canary and stable services based on the desired traffic weights defined in your Rollout strategy.
*Figure: AWS Load Balancer Controller architecture, illustrating how it manages AWS Application Load Balancers and integrates with Kubernetes Ingress resources.*
Implementing AWS ALB Traffic Routing in Argo Rollouts
To configure Argo Rollouts to utilize AWS ALB for traffic management during AWS server update rollouts, define the `trafficRouting` strategy within your Rollout specification. Here’s a configuration example:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      canaryService: canary-service
      stableService: stable-service
      trafficRouting:
        alb:
          ingress: ingress
          servicePort: 443
```
Key Configuration Parameters:
- `canaryService` and `stableService`: These fields are essential and point to the Kubernetes Services that correspond to your canary and stable ReplicaSets (a sketch of these Services appears below). Argo Rollouts will modify these Services to direct traffic to the respective ReplicaSets.
- `trafficRouting.alb.ingress`: This specifies the name of the Ingress resource that Argo Rollouts will manage for traffic splitting.
- `trafficRouting.alb.servicePort`: This defines the port that your services are listening on.
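To make the Service references concrete, here is a minimal sketch of the two Services the Rollout above points at. The `app: my-app` selector and the target port are assumptions for illustration; at runtime, Argo Rollouts additionally injects a pod-template-hash selector into each Service so that it targets only the matching ReplicaSet.

```yaml
# Hypothetical canary and stable Services for the Rollout above.
# The app label and ports are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: canary-service
spec:
  selector:
    app: my-app          # assumed application label
  ports:
    - port: 443          # matches trafficRouting.alb.servicePort
      targetPort: 8443   # assumed container port
---
apiVersion: v1
kind: Service
metadata:
  name: stable-service
spec:
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```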
Ingress Resource Definition:
The referenced Ingress resource needs to be configured with a rule that aligns with your Rollout service. Here’s an example Ingress configuration:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: root-service
                port:
                  name: use-annotation
```
Important Ingress Settings:
- `kubernetes.io/ingress.class: alb`: This annotation ensures that the AWS Load Balancer Controller manages this Ingress.
- `backend.service.port.name: use-annotation`: This crucial setting instructs the AWS Load Balancer Controller to rely on annotations for traffic direction, which Argo Rollouts will dynamically manage.
- `backend.service.name`: This service name should typically match either `canary.trafficRouting.alb.rootService` (if specified in the Rollout) or `canary.stableService` (if `rootService` is omitted). In this example, it’s set to `root-service`.
Example of Injected Annotation:
During an AWS server update rollout, Argo Rollouts will inject the `alb.ingress.kubernetes.io/actions.<service-name>` annotation. Here’s an example of how this annotation might look after Argo Rollouts configures traffic splitting with a 10% canary weight and 90% stable weight:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/actions.root-service: |
      {
        "Type":"forward",
        "ForwardConfig":{
          "TargetGroups":[
            {
              "Weight":10,
              "ServiceName":"canary-service",
              "ServicePort":"80"
            },
            {
              "Weight":90,
              "ServiceName":"stable-service",
              "ServicePort":"80"
            }
          ]
        }
      }
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: root-service
                port:
                  name: use-annotation
```
Argo Rollouts also adds the `rollouts.argoproj.io/managed-alb-actions` annotation for internal bookkeeping, tracking which actions it’s managing on the Ingress.
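As a rough, hypothetical sketch (the payload format is an internal detail of Argo Rollouts and may vary between versions), this bookkeeping annotation maps a Rollout’s name to the action it owns:

```yaml
# Illustrative only: the exact JSON shape of this internal annotation
# is not guaranteed and may differ between Argo Rollouts versions.
metadata:
  annotations:
    rollouts.argoproj.io/managed-alb-actions: '{"example-rollout":"root-service"}'
```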
Advanced Features for Robust AWS Server Update Rollouts
The Argo Rollouts and AWS ALB integration offers several advanced features to enhance the reliability and safety of your AWS server update rollouts.
Root Service Configuration
By default, Argo Rollouts uses the `stableService` name to generate the action name within the `alb.ingress.kubernetes.io/actions.<service-name>` annotation. However, you can explicitly specify a different service name using the `rootService` field. This is beneficial when a single Ingress manages multiple services, such as when implementing separate routes for canary, stable, and root services for testing or A/B testing purposes.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      canaryService: guestbook-canary
      stableService: guestbook-stable
      trafficRouting:
        alb:
          rootService: guestbook-root
          # ... other alb configurations
```
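With `rootService` set, the action annotation on the managed Ingress is keyed by the root service’s name rather than the stable service’s name, along these lines (the weights shown are illustrative):

```yaml
# Illustrative fragment: the action key now uses the guestbook-root name.
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.guestbook-root: |
      {
        "Type":"forward",
        "ForwardConfig":{
          "TargetGroups":[
            {"Weight":10,"ServiceName":"guestbook-canary","ServicePort":"80"},
            {"Weight":90,"ServiceName":"guestbook-stable","ServicePort":"80"}
          ]
        }
      }
```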
Sticky Sessions for Consistent User Experience
When performing canary deployments, session stickiness is often desired so that users have a consistent experience during the rollout. With AWS ALB and Argo Rollouts, you can enable sticky sessions by configuring `stickinessConfig` within your Rollout spec:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      trafficRouting:
        alb:
          stickinessConfig:
            enabled: true
            durationSeconds: 3600
          # ... other alb configurations
```
This configuration activates sticky sessions on the ALB target group, ensuring that users are consistently routed to the same backend (canary or stable) for the specified duration.
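Under the hood, this should surface as a stickiness block inside the forward action that Argo Rollouts writes to the Ingress annotation. The following is a sketch, assuming the ELBv2 forward-action schema used by the AWS Load Balancer Controller; service names and weights are carried over from the earlier example:

```yaml
# Sketch of the forward action with stickiness enabled (field names follow
# the ELBv2 forward-action schema; values here are illustrative).
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.root-service: |
      {
        "Type":"forward",
        "ForwardConfig":{
          "TargetGroups":[
            {"Weight":10,"ServiceName":"canary-service","ServicePort":"80"},
            {"Weight":90,"ServiceName":"stable-service","ServicePort":"80"}
          ],
          "TargetGroupStickinessConfig":{
            "Enabled":true,
            "DurationSeconds":3600
          }
        }
      }
```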
Zero-Downtime Updates with Target Group Verification
To further enhance zero-downtime AWS server update rollouts, Argo Rollouts provides TargetGroup verification features:
TargetGroup IP Verification
This feature is crucial when using the AWS Load Balancer Controller in IP mode, where ALBs target individual pod IPs. In IP mode, there’s a potential risk of downtime if the TargetGroup’s pod IP registrations become outdated, especially during rapid scaling events.
With TargetGroup IP verification enabled, Argo Rollouts verifies that the ALB TargetGroup accurately reflects the pod IPs of the `bluegreen.activeService` or `canary.stableService` before proceeding with the rollout. It does this by querying AWS APIs and comparing the IPs registered in the TargetGroup with the pod IPs in the Kubernetes Endpoints list.
TargetGroup Weight Verification
TargetGroup weight verification addresses scenarios where, due to external factors like AWS rate limiting or controller downtime, weight adjustments made to the Ingress annotation might not immediately propagate to the underlying ALB TargetGroup.
This feature ensures that after Argo Rollouts sets a canary weight, it verifies via AWS APIs that the weights in the ALB TargetGroup are indeed updated to the desired values before proceeding with further rollout steps.
Enabling TargetGroup Verification:
To enable these verification features, add the `--aws-verify-target-group` flag to the Argo Rollouts controller deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-rollouts
spec:
  template:
    spec:
      containers:
        - name: argo-rollouts
          args: [--aws-verify-target-group]
```
Required AWS Permissions:
For TargetGroup verification to function, the Argo Rollouts controller needs specific AWS API permissions for the Elastic Load Balancing service. Ensure your Argo Rollouts deployment has the necessary IAM permissions, including actions such as `elasticloadbalancing:DescribeTargetGroups`, `elasticloadbalancing:DescribeLoadBalancers`, and the others listed in the Argo Rollouts documentation.
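As a minimal sketch only (not the complete list; consult the Argo Rollouts documentation for every required action), an IAM policy granting the permissions named above might look like this. The `DescribeTargetHealth` action is an assumption here, included because IP verification needs to read the registered target IPs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeTargetHealth"
      ],
      "Resource": "*"
    }
  ]
}
```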
Ping-Pong for Zero-Downtime with Pod Readiness Gates
The Ping-Pong feature in Argo Rollouts addresses a specific challenge with AWS’s recommended approach to zero-downtime updates, which relies on pod readiness gate injection. While readiness gates are beneficial, modifications to Service selectors can prevent the AWS Load Balancer Controller from injecting readiness gates effectively.
Ping-Pong services provide a workaround. Instead of modifying Service selectors, Ping-Pong utilizes two services (e.g., “ping” and “pong”). During the rollout, one service acts as the stable service, and the other as the canary. At the end of the rollout, traffic is fully shifted to the “canary” service, and then Argo Rollouts “swaps the hats” of the ping and pong services. This approach allows for consistent readiness gate injection as the underlying services aren’t constantly changing their selectors during the rollout.
Enabling Ping-Pong:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  # ... other rollout configurations
  strategy:
    canary:
      pingPong: # Indicates ping-pong services are enabled
        pingService: ping-service
        pongService: pong-service
      trafficRouting:
        alb:
          ingress: alb-ingress
          servicePort: 80
          # ... other alb configurations
```
Customization Options
Argo Rollouts provides customization options to adapt to different AWS ALB Controller configurations.
Custom Annotation Prefix
If your AWS Load Balancer Controller is configured to use a custom annotation prefix (instead of the default `alb.ingress.kubernetes.io`), you can specify this prefix in your Rollout:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      trafficRouting:
        alb:
          annotationPrefix: custom.alb.ingress.kubernetes.io
```
Custom Ingress Class
By default, Argo Rollouts interacts with Ingress resources that use the `kubernetes.io/ingress.class: alb` annotation or `ingressClassName: alb`. If you’re using a different Ingress class name, you can configure Argo Rollouts to recognize it using the `--alb-ingress-classes` flag in the controller arguments. You can specify this flag multiple times to support multiple Ingress classes, or pass an empty string (`--alb-ingress-classes ''`) to operate on Ingresses without any class specified.
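Mirroring the controller Deployment shown earlier, the flag might be wired in like this (the class name `my-alb-class` is purely illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-rollouts
spec:
  template:
    spec:
      containers:
        - name: argo-rollouts
          # "my-alb-class" is a hypothetical custom Ingress class name.
          args: [--alb-ingress-classes=my-alb-class]
```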
Conclusion: Streamlining AWS Server Update Rollouts
Integrating Argo Rollouts with the AWS Load Balancer Controller empowers you to achieve sophisticated and reliable AWS server update rollouts in your Kubernetes environments on AWS. By leveraging the ALB’s traffic management capabilities and Argo Rollouts’ progressive delivery strategies, you can minimize risk, ensure zero-downtime updates, and enhance the overall user experience. Whether you are implementing canary deployments, blue/green updates, or utilizing advanced features like TargetGroup verification and Ping-Pong, this combination provides a robust and flexible solution for modern application delivery on AWS.
By carefully configuring your Rollout and Ingress resources and understanding the underlying mechanisms, you can confidently perform AWS server update rollouts with Argo Rollouts and AWS ALB, keeping your applications highly available and responsive to user needs.