I have a managed AKS cluster on which I've configured three pods to act as a mock processor service (they simply receive a request, hold it for 15 seconds, and then return a response). Alongside that, I have an API that is responsible for reading messages from an Azure Service Bus queue and making a request to the mock processors for each message.
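For reference, the mock processor behaviour described above can be sketched roughly like this (the HTTP framework, handler shape, and port 8080 are my assumptions, since I haven't shown the actual code):

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

HOLD_SECONDS = 15  # each request is held for 15 seconds before replying


class MockProcessor(BaseHTTPRequestHandler):
    """Mock processor: receive a request, wait, then return a response."""

    def do_POST(self):
        time.sleep(HOLD_SECONDS)  # simulate 15 s of "processing"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"processed")


def make_server(port: int = 8080) -> HTTPServer:
    # Bind on all interfaces inside the pod; the port is an assumption.
    return HTTPServer(("", port), MockProcessor)


# make_server().serve_forever()  # uncomment to run inside the pod
```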
The issue I'm seeing is that these three processors each handle a seemingly random number of messages from the test batch (e.g. 15 messages), and the load is not distributed evenly across the processor pods.

In the first test run, I saw Processor Pod 1 handle 8 messages, Processor Pod 2 handle 7, and Processor Pod 3 handle 0.

In the second test run, I saw Processor Pod 1 handle 6 messages, Processor Pod 2 handle 5, and Processor Pod 3 handle 4.

Does anyone know if this is how it's expected to behave?
I've been reading about AKS and kube-proxy and found that the default mode for load distribution is iptables. I'm not sure whether that is the reason or something else.
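One thing I've read (stating it here as an assumption, not a confirmed diagnosis) is that kube-proxy's iptables mode picks a backend pod per TCP *connection*, not per HTTP request, so if the calling API reuses a keep-alive connection, all requests on it land on the same pod. A minimal sketch of forcing a fresh connection per message, assuming a plain HTTP client and a made-up service URL:

```python
import urllib.request


def build_request(body: bytes,
                  url: str = "http://mock-processor:8080/") -> urllib.request.Request:
    # "Connection: close" disables keep-alive, so each call opens a new
    # socket and kube-proxy re-picks a backend pod for it.
    return urllib.request.Request(url, data=body,
                                  headers={"Connection": "close"})


def send_message(body: bytes) -> int:
    # Open a new connection per message; returns the HTTP status code.
    with urllib.request.urlopen(build_request(body)) as resp:
        return resp.status
```

This is only a sketch of the idea; my real API may be holding connections open differently.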
Does anyone know how I can control the message load distribution so that messages are processed evenly across all available pods?