HA RabbitMQ Best Practice

Hi Everyone

We already have some nameko services in production, all using the HTTP
entrypoints. Now (for performance reasons) we're in the process of moving
some of the internal services to communicate over RPC, and we need to set
up HA RabbitMQ before we can move forward.

It would be great to see this sort of information in the docs and maybe I
can help with that - after a little help here first!

We have a fundamental understanding of what's going on here, and what
we're pretty sure of is that we don't want the recommended mirrored
queues, because with our many service instances the messages are going to
be consumed multiple times. We think we need something more like failover
exchanges and queues.

So if someone is able to clearly explain the best-practice pattern for HA
RabbitMQ with nameko, I think it would help many other people here
besides us.

Many thanks.

I'm pretty sure mirrored queues are a requirement for an HA setup.
"Failover exchanges" aren't a thing -- exchanges logically exist on every
node in the cluster, so it doesn't matter which node you're connected to
when you publish. You can also consume from any queue having connected to
any node, but each queue does live on a particular node: consumers are
redirected inside the cluster to the node that owns the queue they want
to consume from. Mirroring a queue makes slave copies available on other
nodes, so consumers can connect to those if the master node goes away,
which is what gives you high availability for consumption. If you *don't*
mirror queues, you'll see errors about being unable to connect to the
master node if it goes away.

The simplest HA setup is a two-node cluster with a policy that mirrors
every queue onto every node. That will "just work" as long as you're using
modern Nameko (with publish-confirms enabled and consumer heartbeats) and,
if using a load-balancer, you've got it set up sensibly.
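
For illustration, a mirror-everything policy like the one described can be installed through RabbitMQ's management-plugin HTTP API (`rabbitmqctl set_policy` does the same job from the shell). This is only a sketch: the host, port, and `guest` credentials below are assumptions for a default local broker, and nothing is sent until you call `urlopen`.

```python
import base64
import json
import urllib.request

def mirror_all_policy_request(host="localhost", port=15672,
                              user="guest", password="guest"):
    """Build the PUT request that installs an 'ha-all' policy
    mirroring every queue onto every node of the cluster."""
    policy = {
        "pattern": ".*",                   # match every queue name
        "definition": {"ha-mode": "all"},  # mirror onto all nodes
        "apply-to": "queues",
    }
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"http://{host}:{port}/api/policies/%2F/ha-all",  # %2F = vhost "/"
        data=json.dumps(policy).encode(),
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {auth}"},
    )

# urllib.request.urlopen(mirror_all_policy_request()) would apply the
# policy against a live broker; here we only build the request.
```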

Not sure I understand your point about service instances consuming messages
multiple times. Do you mean that you don't want any messages to be
duplicated? A general point about AMQP entrypoints in Nameko is that they
give you "at least once" delivery semantics (as opposed to "at most once")
-- meaning that in the event of an error you may see messages delivered
more than once, but you'll never lose one. This stems from the fact that
entrypoints ack messages after the worker completes, and that Rabbit
reclaims any unacked messages owned by a consumer if that consumer
disconnects.
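
To make the at-least-once behaviour concrete, here's a toy sketch of ack-after-completion. This is not nameko's internals -- `FakeMessage` and `run_worker` are invented for illustration -- but it shows why a worker that dies mid-method leaves its message unacked, so the broker redelivers it rather than losing it:

```python
class FakeMessage:
    """Stand-in for an AMQP message; invented for this sketch."""
    def __init__(self, body):
        self.body = body
        self.acked = False

    def ack(self):
        self.acked = True

def run_worker(message, worker):
    worker(message.body)  # run the worker first...
    message.ack()         # ...ack only once it completed successfully

msg = FakeMessage("payload")
try:
    run_worker(msg, lambda body: 1 / 0)  # worker fails mid-method
except ZeroDivisionError:
    pass

# msg.acked is still False: the broker sees an unacked message on a
# dead consumer and requeues it -- hence possible redelivery, but
# never a lost message.
```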

···

On Friday, June 16, 2017 at 12:20:10 PM UTC+1, simon harrison wrote the
original post above.

Hi Matt

Thanks for clarifying these things. Apologies for the slow reply - summer
hols etc. etc.

Reading my post again, I was thinking about failover queues, not exchanges
(sorry for the confusion), and from what you describe the mirrored queues
should only be consumed from if the master goes away -- the behaviour we
want.

So we were experiencing messages delivered more than once on a regular
basis with mirrored queues. Maybe this does suggest a config issue? We
were on 2.5.1 at the time (now 2.6). I'm not sure about the consumer
heartbeats, but I suspect a load-balancing problem. Can you explain what
"sensibly" means, and maybe give some insight into what can go wrong if
this is not done sensibly? Could a bad setup lead to the scenario I
described?

Thanks

···

On Tuesday, 20 June 2017 08:51:57 UTC+1, Matt Yule-Bennett wrote the
reply above.

By "sensible" load-balancer setup I mean that the idle disconnection time
is longer than your consumer heartbeat. The heartbeat is what keeps an
otherwise idle connection active in the eyes of the load-balancer, so as
long as your heartbeat is frequent enough the load-balancer will maintain
the connection.
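
As a rule of thumb, that relationship is easy to sanity-check. This helper and its safety margin are my own invention, not a nameko API:

```python
def lb_idle_timeout_ok(lb_idle_timeout, heartbeat, margin=2.0):
    """Return True if the load-balancer's idle-disconnect timeout
    comfortably exceeds the AMQP heartbeat interval (both in seconds),
    so a heartbeat always arrives before the balancer gives up on an
    otherwise idle connection."""
    return lb_idle_timeout > heartbeat * margin

# e.g. a 60s heartbeat behind a balancer that cuts idle connections
# after 180s is fine; a 30s idle timeout is not.
```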

Mirrored queues won't cause messages to be duplicated (mirroring doesn't
create multiple independent queues, just backups). You *will* see messages
redelivered if your consumer loses its connection before ack'ing a
message though, as would happen if a load-balancer yanked what it thought
to be an idle connection. This is because Rabbit promises at-least-once
delivery -- messages are requeued and redelivered if the consumer
disconnects before acknowledging them. In Nameko, entrypoints ack only
after the worker method has executed, so a disconnection mid-method and
subsequent reconnection may look like a duplicate request.
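
One common mitigation is to make handlers idempotent, e.g. by deduplicating on a message or request id. A minimal sketch -- the in-memory set is for illustration only (a real service would need a shared store such as Redis or a database), and `handle`/`message_id` are hypothetical names:

```python
processed_ids = set()  # illustration only: use a shared store in production

def handle(message_id, do_work):
    """Run do_work() at most once per message_id, so a redelivered
    message doesn't repeat its side effects."""
    if message_id in processed_ids:
        return "duplicate-skipped"
    result = do_work()
    processed_ids.add(message_id)  # record success only after the work
    return result

first = handle("msg-1", lambda: "done")
again = handle("msg-1", lambda: "done")  # simulated redelivery
```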

···

On Monday, July 3, 2017 at 2:38:24 PM UTC+1, simon harrison wrote the
reply above.

Great stuff Matt.

This all makes sense.

Final question: How do we configure consumer heartbeats? I see consumer
heartbeats were added in 2.4.4 but I can't see in the docs how to configure
them. Config var?

Thanks once again

···

On Tuesday, 4 July 2017 08:14:14 UTC+1, Matt Yule-Bennett wrote the reply
above.

They're enabled by default. Use the DEFAULT_HEARTBEAT config key to change
the frequency (defaults to 60 seconds).

···

On Tuesday, July 4, 2017 at 9:50:08 AM UTC+1, simon harrison wrote the
reply above.

I believe the config key is HEARTBEAT:
https://github.com/nameko/nameko/blob/master/nameko/constants.py#L5

Jakub Borys | +44 77 1822 5201 | @JakubBorys
<https://twitter.com/JakubBorys>
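
For anyone following along, a minimal config sketch (assuming a standard
nameko YAML config file passed via `nameko run --config config.yaml`; the
URI is a placeholder for your own broker):

```yaml
# config.yaml
AMQP_URI: amqp://guest:guest@localhost:5672/
HEARTBEAT: 30   # seconds between AMQP heartbeats (default 60)
```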

···

On 4 July 2017 at 12:24, Matt Yule-Bennett <bennett.matthew@gmail.com>
wrote the reply above.

Thanks all.

We'll let you know how we get on!

···

On Tuesday, 4 July 2017 12:36:42 UTC+1, Jakub Borys wrote the reply above.

--

You received this message because you are subscribed to the Google Groups
"nameko-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to nameko-dev+unsubscribe@googlegroups.com.
To post to this group, send email to nameko-dev@googlegroups.com.
To view this discussion on the web, visit
https://groups.google.com/d/msgid/nameko-dev/d85f9ef9-4a75-45c3-bc92-978dfe7cfbaa%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.