Batch consumer for RPC endpoint


Hello, is there a way to consume many objects with a single worker? For example, a single producer is producing messages; the consumer (an RPC endpoint) sees those messages and once every 5 seconds consumes at most 100 messages and processes them somehow, then takes the next 100…


This is determined by the prefetch count of the consuming entrypoint. It specifies the number of messages that a consumer may be “processing” at a time.

If the prefetch count is one, a consumer will take exactly one message at a time and not consume any more from the queue until the message is acknowledged (which, in Nameko’s entrypoints, happens after the service method has returned). If the count is higher, the consumer grabs as many messages as it’s allowed and then processes the batch.

As an example, if a single publisher produced 100 messages, which were consumed by two consumers with a prefetch count of 100 each, the first consumer would take all 100 messages if they were in the queue when it connected.

There is no regular interval at which new messages are consumed, though. A consumer with a prefetch of 100 will top up its batch of unacknowledged messages whenever it gets the chance. There isn’t really a performance penalty for this.
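The top-up behaviour described above can be sketched in plain Python. This is a toy model, not the Nameko or AMQP API — the `Consumer`, `top_up`, and `ack` names are hypothetical, chosen just to illustrate how a prefetch limit bounds the number of unacknowledged messages:

```python
from collections import deque

class Consumer:
    """Toy model of an AMQP consumer with a prefetch limit.

    Hypothetical class for illustration; not the Nameko API.
    """
    def __init__(self, queue, prefetch_count):
        self.queue = queue
        self.prefetch_count = prefetch_count
        self.unacked = []

    def top_up(self):
        # Grab messages until we hold `prefetch_count` unacknowledged ones
        # or the queue runs dry.
        while len(self.unacked) < self.prefetch_count and self.queue:
            self.unacked.append(self.queue.popleft())

    def ack(self, message):
        # Acknowledging frees a slot, so the consumer tops up again.
        self.unacked.remove(message)
        self.top_up()

queue = deque(range(100))
first = Consumer(queue, prefetch_count=100)
second = Consumer(queue, prefetch_count=100)

first.top_up()   # first consumer grabs everything in the queue
second.top_up()  # nothing left for the second consumer
print(len(first.unacked), len(second.unacked))  # 100 0
```

This also shows the earlier point about two consumers with a prefetch of 100 each: whichever connects first takes all 100 messages if they are already queued.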

In Nameko 2.x the prefetch count is equal to the max_workers of the service, which defaults to 10. In the 3.x prerelease branch (installable from PyPI with pip install --pre nameko) you can explicitly set the prefetch count per entrypoint by passing prefetch_count=n as a keyword argument when declaring it.
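So in 2.x, the way to raise the prefetch count is to raise `max_workers` in the service config. A minimal sketch, assuming the usual YAML config file passed to `nameko run --config`:

```yaml
# config.yaml -- in Nameko 2.x the prefetch count follows max_workers
max_workers: 100
```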


Hi @mattbennett, thanks for the response. I am already trying to use 3.x. Taking your last example, where 100 messages are sent and the prefetch count is 100: does that mean the method in the worker will be executed 100 times, but there will be only one instance of the worker, so that I can cache those messages and send a single batch when I consume the last one?


Ah, I understand your original question better now. No, it creates 100 workers and 100 results.

If you want to return results in batches, you’ll need to use something other than the `@rpc` decorator, or wrap the RPC calls in something else that batches the results.

The messaging paradigm you want is multi-request, single-response RPC. Nameko’s AMQP RPC only does single-request single-response RPC. You could implement this yourself using AMQP by creating some custom extensions.
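To make the “wrapping” idea concrete, here is a minimal sketch of a batcher that collects items and flushes when the batch is full or a flush interval has elapsed — the size/interval behaviour the original question asked for. The `Batcher` class and its methods are hypothetical helpers, not part of Nameko:

```python
import time

class Batcher:
    """Collect items; flush when max_size is reached or the interval elapses.

    Hypothetical sketch of batching logic, not a Nameko extension.
    """
    def __init__(self, max_size=100, interval=5.0, clock=time.monotonic):
        self.max_size = max_size
        self.interval = interval
        self.clock = clock       # injectable for testing
        self.items = []
        self.last_flush = clock()
        self.batches = []        # stand-in for "process the batch somehow"

    def add(self, item):
        self.items.append(item)
        if (len(self.items) >= self.max_size
                or self.clock() - self.last_flush >= self.interval):
            self.flush()

    def flush(self):
        if self.items:
            self.batches.append(self.items)
            self.items = []
        self.last_flush = self.clock()

batcher = Batcher(max_size=3, interval=60.0)
for n in range(7):
    batcher.add(n)
batcher.flush()  # flush the trailing partial batch
print(batcher.batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

In a real service you would need something to drive the time-based flush even when no new messages arrive (e.g. a timer entrypoint or a background thread), which is where a custom extension would come in.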

You can do multi-request, single-response RPC with gRPC (referred to there as STREAM_UNARY). I am just about to release the first version of nameko-grpc if you care to try it :smiley:


Thanks! I understand that Nameko’s purpose is a bit different, but wanted to ask if I missed something in the documentation. Thanks once more.