Rabbit Connection Behaviour

Hi Devs

I'd like to confirm the behaviour of service connections to RabbitMQ when
we have services A and B, where A only has HTTP entrypoints but has an
RpcProxy to B, and B obviously has RPC entrypoints.

When we start up each service we see connections to RabbitMQ and the
service queue for B created. We assume each service now has a single TCP
connection to rabbit.

*When an http request comes into service A and a worker is spawned to
handle it...*
- does the RpcProxy create a new rabbit connection for the worker each time?
- or does that worker get a Channel within the initial TCP connection?
- or is no connection made unless the RpcProxy is used?
- or something else?

*During handling of this request service A then needs to make a call to
service B.*
- does each worker create and manage its own reply queue as it makes the
call?
- or is there a single reply queue for any worker to use?
- or something else?

Thanks

Hi Simon,

Service instances with RpcProxies create a single queue to use for all
replies sent to that instance during its lifetime.

Connections to rabbit are pooled by kombu, but for listening there is
currently a single shared extension (the `QueueConsumer`) which listens to
reply queues (or main service queues) if a service is using an RpcProxy or
RPC entrypoint. It starts listening on startup.
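The pattern described above, one dedicated consumer connection plus a pool
that publishers borrow from, can be sketched in plain Python (a toy
illustration only, not nameko's or kombu's actual implementation):

```python
class ConnectionPool:
    """Toy stand-in for kombu's connection pool (LIFO reuse)."""

    def __init__(self, size):
        self._free = ["connection-{}".format(i) for i in range(size)]

    def acquire(self):
        return self._free.pop()  # reuse a free connection

    def release(self, conn):
        self._free.append(conn)  # hand it back for the next publisher


pool = ConnectionPool(size=2)

# the QueueConsumer holds one dedicated connection for listening,
# created once at startup and kept for the container's lifetime
consumer_connection = "consumer-connection"

# each publish borrows a pooled connection and returns it afterwards
first = pool.acquire()
pool.release(first)
second = pool.acquire()
pool.release(second)
assert first == second  # repeated publishes reuse the same connection
```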

Best,
D

···

On Friday, 31 March 2017 12:31:37 UTC+1, simon harrison wrote:


Thanks David

It would help me to clarify some language, I think.

My understanding is that a "container" spawns workers which instantiate a
service and then use it to execute the request. A new worker is spawned for
each request. Given your answer, that would mean to me that each new
request gets its own reply queue. But what I actually observe is a single
reply queue for the life of the container. Can you help me out here?

Also, in my example, the rabbit logs show that service A appears to
establish a new connection to rabbit on each incoming request, which
surprises me. If service A handles 99% HTTP calls and only makes the odd
RPC call to B, I don't want the overhead of connecting to rabbit on each
HTTP call. Note I do say "appears"; I may be reading this wrong. In your
answer you explain listening, but service A here isn't listening: it
sometimes calls and then listens. So is a new Producer created with each
service instance, with a new connection, regardless of whether a call is
made? Looking here,
https://github.com/nameko/nameko/blob/master/nameko/rpc.py#L501, maybe
this comes from the kombu pool you described and there is no extra overhead?

I'm asking these questions because our rabbit logs show connections
constantly opening and closing as we run tests over these services.

Thanks again.

Simon

···

On Friday, 31 March 2017 13:25:58 UTC+1, David Szotten wrote:


Thanks David

It would help me to clarify some language, I think.

My understanding is that a "container" spawns workers which instantiate a
service and then use it to execute the request. A new worker is spawned for
each request. Given your answer, that would mean to me that each new
request gets its own reply queue. But what I actually observe is a single
reply queue for the life of the container. Can you help me out here?

It's correct that the container spawns workers to handle each request. The
container also manages instances of each Extension (including Entrypoints
and DependencyProviders) that the service declares. These live for the
lifetime of the container. The reply queue for the RpcProxy is managed by
the DependencyProvider instance, which is why you see it living for the
lifetime of the container.
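The lifetime split can be sketched like this (plain Python, purely
illustrative of who owns what; the class names are hypothetical):

```python
import itertools
import uuid


class ReplyListener:
    """Stands in for the DependencyProvider: one per container."""

    def __init__(self):
        # created once, when the container starts
        self.reply_queue = "rpc.reply-{}".format(uuid.uuid4())


class Container:
    def __init__(self):
        self.reply_listener = ReplyListener()  # container lifetime
        self._worker_ids = itertools.count()

    def spawn_worker(self):
        # a fresh worker per request, but every worker shares the
        # container's single reply queue
        return {"worker_id": next(self._worker_ids),
                "reply_queue": self.reply_listener.reply_queue}


container = Container()
w1 = container.spawn_worker()
w2 = container.spawn_worker()
assert w1["worker_id"] != w2["worker_id"]       # new worker per request
assert w1["reply_queue"] == w2["reply_queue"]   # one queue per container
```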

Also, in my example, the rabbit logs show that service A appears to
establish a new connection to rabbit on each incoming request, which
surprises me. If service A handles 99% HTTP calls and only makes the odd
RPC call to B, I don't want the overhead of connecting to rabbit on each
HTTP call. Note I do say "appears"; I may be reading this wrong. In your
answer you explain listening, but service A here isn't listening: it
sometimes calls and then listens. So is a new Producer created with each
service instance, with a new connection, regardless of whether a call is
made? Looking here,
https://github.com/nameko/nameko/blob/master/nameko/rpc.py#L501, maybe
this comes from the kombu pool you described and there is no extra
overhead?

Service A will create a new connection to consume RPC replies on startup.
Thereafter, every time the RpcProxy to B is *used*, a new message is
published. Publishers draw their connections from the kombu pool, so I
would not expect subsequent uses of the RpcProxy to be creating new
connections.

I'm asking these questions because our rabbit logs show connections
constantly opening and closing as we run tests over these services.

Don't forget that tests are isolated. If you're using the
`container_factory` fixture, every test will create an entirely new
container, which will establish new connections to Rabbit.
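In other words, a `container_factory`-style fixture behaves roughly like
this (plain-Python illustration; the connection handling is faked):

```python
connections_opened = []


class Container:
    def __init__(self):
        # each container establishes its own connections on startup
        connections_opened.append(
            "amqp-connection-{}".format(len(connections_opened)))

    def stop(self):
        pass  # connections are torn down with the container


def run_test():
    # what an isolated test does: build a brand-new container,
    # exercise it, then tear it down
    container = Container()
    container.stop()


run_test()
run_test()
assert len(connections_opened) == 2  # fresh connections for every test
```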

···

On Friday, March 31, 2017 at 1:55:55 PM UTC+1, simon harrison wrote:

Thanks again.

Simon


Thanks chaps.

I think I'm up to speed now.

Appreciated.

···

On Friday, 31 March 2017 14:20:11 UTC+1, Matt Yule-Bennett wrote:
