GUE healthcheck fails with self IP #139

@alptugay

Description

We are trying to deploy our load balancers as all-in-one boxes, meaning every load balancer runs both an L4 and an L7 load-balancing daemon. While deploying GLB we noticed that each GLB instance on an all-in-one host can redirect packets to the other load balancers' L7 ports, but not to its own L7 port, because the GUE healthcheck against its own IP fails.

This is the contents of the forwarding_table.src.json file on all of our all-in-one load balancers (192.168.152.40, 192.168.152.41, 192.168.152.43):


{
  "tables": [
    {
      "name": "first_table",
      "hash_key": "12345678901234561234567890123456",
      "seed": "34567890123456783456789012345678",
      "binds": [
        { "ip": "130.30.30.30", "proto": "tcp", "port": 80 },
        { "ip": "fdb4:98ce:52d4::42", "proto": "tcp", "port": 80 }
      ],
      "backends": [
        { "ip": "192.168.152.40", "state": "active", "healthchecks": {"http": 80, "gue": 19523} },
        { "ip": "192.168.152.41", "state": "active", "healthchecks": {"http": 80, "gue": 19523} },
        { "ip": "192.168.152.43", "state": "active", "healthchecks": {"http": 80, "gue": 19523} }
      ]
    }
  ]
}
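Since all three hosts share this same src.json, each host ends up GUE-healthchecking its own address as well as its peers. A minimal sketch (Python standard library only; the IPs and ports are taken from the config above, and split_targets is a hypothetical helper) that partitions the backend list into the self-check and the peer checks for a given host:

```python
import json

# forwarding_table.src.json as shown above, trimmed to the backend entries
SRC = json.loads("""
{
  "tables": [
    {
      "name": "first_table",
      "backends": [
        {"ip": "192.168.152.40", "state": "active", "healthchecks": {"http": 80, "gue": 19523}},
        {"ip": "192.168.152.41", "state": "active", "healthchecks": {"http": 80, "gue": 19523}},
        {"ip": "192.168.152.43", "state": "active", "healthchecks": {"http": 80, "gue": 19523}}
      ]
    }
  ]
}
""")

def split_targets(config, self_ip):
    """Partition GUE healthcheck targets into the hairpin (self) check and peer checks."""
    backends = [b["ip"] for t in config["tables"] for b in t["backends"]]
    self_checks = [ip for ip in backends if ip == self_ip]
    peer_checks = [ip for ip in backends if ip != self_ip]
    return self_checks, peer_checks

self_checks, peer_checks = split_targets(SRC, "192.168.152.40")
print(self_checks)  # the hairpin check that fails on this host
print(peer_checks)  # the peer checks that pass
```

On each of the three hosts the self_checks list contains exactly that host's own IP, which is the check that fails.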

And this is our IP configuration (output of ip addr):


3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdpgeneric/id:51 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:d2:1b:1e brd ff:ff:ff:ff:ff:ff
    inet 192.168.152.40/24 brd 192.168.152.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fed2:1b1e/64 scope link
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 130.30.30.30/32 scope global tunl0
       valid_lft forever preferred_lft forever

The output of ip fou show:
port 19523 gue

The output of /etc/glb/forwarding_table.checked.json:

{
  "healthchecks": null,
  "tables": [
    {
      "name": "first_table",
      "hash_key": "12345678901234561234567890123456",
      "seed": "34567890123456783456789012345678",
      "binds": [
        {
          "ip": "130.30.30.30",
          "proto": "tcp",
          "port": 80
        },
        {
          "ip": "fdb4:98ce:52d4::42",
          "proto": "tcp",
          "port": 80
        }
      ],
      "backends": [
        {
          "ip": "192.168.152.40",
          "state": "active",
          "healthy": false,
          "healthchecks": {
            "http": 80,
            "gue": 19523
          }
        },
        {
          "ip": "192.168.152.41",
          "state": "active",
          "healthy": true,
          "healthchecks": {
            "http": 80,
            "gue": 19523
          }
        },
        {
          "ip": "192.168.152.43",
          "state": "active",
          "healthy": true,
          "healthchecks": {
            "http": 80,
            "gue": 19523
          }
        }
      ]
    }
  ]
}

This file was taken from the LB instance with IP address 192.168.152.40, and as you can see, 192.168.152.40 is shown as unhealthy. On another instance, for example 192.168.152.41, both 192.168.152.40 and 192.168.152.43 are seen as healthy, whereas 192.168.152.41 itself is seen as unhealthy.
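In other words, on every host the one backend reported unhealthy is the backend whose IP equals the host's own address. A small sketch (standard library only, using the values reported above; unhealthy_backends is a hypothetical helper, not part of GLB) that extracts the unhealthy backends from a forwarding_table.checked.json document:

```python
import json

# forwarding_table.checked.json from 192.168.152.40, trimmed to the relevant fields
CHECKED = json.loads("""
{
  "tables": [
    {
      "name": "first_table",
      "backends": [
        {"ip": "192.168.152.40", "state": "active", "healthy": false},
        {"ip": "192.168.152.41", "state": "active", "healthy": true},
        {"ip": "192.168.152.43", "state": "active", "healthy": true}
      ]
    }
  ]
}
""")

def unhealthy_backends(checked):
    """Return the backend IPs marked "healthy": false in a checked.json document."""
    return [b["ip"]
            for t in checked["tables"]
            for b in t["backends"]
            if not b["healthy"]]

print(unhealthy_backends(CHECKED))  # on this host: only its own IP, 192.168.152.40
```

Running the same extraction against the checked.json on 192.168.152.41 or 192.168.152.43 would, per the behavior described above, again report only that host's own IP.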

Any help is appreciated.
