
ROSA: multiple HA deployments within the same subnet group can exhaust IPs for other HA deployments #978

Open
kami619 opened this issue Sep 13, 2024 · 0 comments
Labels
kind/bug Something isn't working status/triage

Comments


kami619 commented Sep 13, 2024

Describe the bug

In a particular deployment scenario, I attempted a third HA deployment on the same gh-keycloak* cluster pair, and it failed during the global-accelerator-create task:

> oc -n XXXX-keycloak get svc accelerator-loadbalancer
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
accelerator-loadbalancer   LoadBalancer   XXXX            <pending>     443:XXXX/TCP    49s

The relevant subnet group had only 7 free IPs within its XXXX/28 IPv4 CIDR block, while the third deployment needed 8. It could be that the two prior deployments consumed 16 IPs between them on that CIDR block. This produces the error below when you describe the accelerator service, which waits for an IP allocation and appears stuck in a pending state that never resolves.

Events:
  Type     Reason                  Age                    From                Message
  ----     ------                  ----                   ----                -------
  Warning  SyncLoadBalancerFailed  8m45s                  service-controller  Error syncing load balancer: failed to ensure load balancer: error creating load balancer: "InvalidSubnet: Not enough IP space available in subnet-XXXX. ELB requires at least 8 free IP addresses in each subnet.\n\tstatus code: 400, request id: XXXX"
  Warning  SyncLoadBalancerFailed  8m39s                  service-controller  Error syncing load balancer: failed to ensure load balancer: error creating load balancer: "InvalidSubnet: Not enough IP space available in subnet-XXXX. ELB requires at least 8 free IP addresses in each subnet.\n\tstatus code: 400, request id: XXXX"
  Warning  SyncLoadBalancerFailed  8m29s                  service-controller  Error syncing load balancer: failed to ensure load balancer: error creating load balancer: "InvalidSubnet: Not enough IP space available in subnet-XXXX. ELB requires at least 8 free IP addresses in each subnet.\n\tstatus code: 400, request id: XXXX"
  Warning  SyncLoadBalancerFailed  8m8s                   service-controller  Error syncing load balancer: failed to ensure load balancer: error creating load balancer: "InvalidSubnet: Not enough IP space available in subnet-XXXX. ELB requires at least 8 free IP addresses in each subnet.\n\tstatus code: 400, request id: XXXX"
  Warning  SyncLoadBalancerFailed  7m28s                  service-controller  Error syncing load balancer: failed to ensure load balancer: error creating load balancer: "InvalidSubnet: Not enough IP space available in subnet-XXXX. ELB requires at least 8 free IP addresses in each subnet.\n\tstatus code: 400, request id: XXXX"
  Warning  SyncLoadBalancerFailed  6m7s                   service-controller  Error syncing load balancer: failed to ensure load balancer: error creating load balancer: "InvalidSubnet: Not enough IP space available in subnet-XXXX. ELB requires at least 8 free IP addresses in each subnet.\n\tstatus code: 400, request id: XXXX"
  Normal   EnsuringLoadBalancer    3m27s (x7 over 8m46s)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  3m26s                  service-controller  Error syncing load balancer: failed to ensure load balancer: error creating load balancer: "InvalidSubnet: Not enough IP space available in subnet-XXXX. ELB requires at least 8 free IP addresses in each subnet.\n\tstatus code: 400, request id: XXXX"
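The arithmetic behind the exhaustion can be sketched as follows. This is an illustrative calculation, not code from the repository: it assumes the standard AWS rule that 5 addresses are reserved in every VPC subnet, and takes the "at least 8 free IP addresses" requirement from the ELB error message above.

```python
import ipaddress

# AWS reserves 5 addresses in every VPC subnet:
# network address, VPC router, DNS, future use, and broadcast.
AWS_RESERVED_PER_SUBNET = 5

# From the SyncLoadBalancerFailed error: ELB needs at least
# 8 free IPs in each subnet it is placed into.
ELB_MIN_FREE_IPS = 8

def usable_ips(cidr: str) -> int:
    """Total addresses in the CIDR block minus AWS's reserved five."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

def supports_new_elb(cidr: str, ips_already_consumed: int) -> bool:
    """Whether a new load balancer still fits in the subnet."""
    return usable_ips(cidr) - ips_already_consumed >= ELB_MIN_FREE_IPS

# A /28 block has 16 addresses total; after AWS's reserved 5,
# only 11 are usable (the CIDR itself is a hypothetical example).
print(usable_ips("10.0.0.0/28"))        # 11
print(supports_new_elb("10.0.0.0/28", 0))  # True: a first deployment fits
print(supports_new_elb("10.0.0.0/28", 4))  # False: 7 free < 8 required
```

So even before any deployment lands, a /28 subnet leaves little headroom, and a couple of HA deployments sharing it can push the free count below the 8-IP floor for the next one.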

Version

keycloak-benchmark latest main

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Anything else?

No response

@kami619 kami619 added kind/bug Something isn't working status/triage labels Sep 13, 2024