Hi,

I have a question about how nameko works.

We created a nameko service as a proxy to another service that does not use eventing (e.g. we need to poll it to check its status).

Since we are polling, we initially used the @timer entrypoint to do the polling and fire an event on behalf of the service we are trying to eventize.

The trouble comes when our system is under stress: the timer method is still running when the next timer fires.

To avoid using DB locks, we kept the timer method, but now use it only to dispatch an event that is consumed by an event_handler.

In the event handler, we use this construct:

@event_handler('trainproxy', 'timer_alert', handler_type=SINGLETON)

We hope this means that one and only one instance is handling an event, and that this remains true even with multiple instances of the service. Is this correct?

I believe you are correct, but if the service which has the timer runs multiple times, you will still have multiple executions.

Yesterday I ran a performance/stress test on this piece of code. We realised that when the system is loaded, we can get two copies of the timer code running at the same time. We thought we had fixed this using the 'smart' approach where only one event can be consumed at a time, but it turns out that this is not true. We saw that the events are consumed out of order, and concurrently too. We are scratching our heads now. Out of order is a concern, but for this case it is not too bad; two events being consumed even after SINGLETON was used is weird, though.

Not sure if somebody can give some advice. It could still be a bug on our side, but we double-checked: all our persistence and logic seem right. Appreciate any tips.

···

On Thursday, July 26, 2018 at 3:50:35 PM UTC-7, coddericko wrote:

My idea is this:

@timer(interval=int(os.getenv('TRAIN_TIMER', DEFAULT_TIME_INTERVAL)))  # env vars are strings, so cast
def _ping(self):
    ...

This will send a message to a queue periodically, so even if my event_handler method slows to a crawl, only one handler can be running at a time, even with multiple instances of the service, thanks to:

@event_handler('trainproxy', 'timer_alert', handler_type=SINGLETON)

So that is why I hope we are using SINGLETON as expected. Thanks for your feedback!
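The intent above — a fast timer that only enqueues, with a single consumer draining the slow work one message at a time — can be sketched with the standard library alone (no nameko or broker involved; all names here are illustrative, not nameko API):

```python
import queue
import threading
import time

tick_queue = queue.Queue()

active = 0       # handlers currently running
max_active = 0   # peak concurrency observed
processed = []
counter_lock = threading.Lock()

def handle(tick):
    """Simulate the slow event handler."""
    global active, max_active
    with counter_lock:
        active += 1
        max_active = max(max_active, active)
    time.sleep(0.01)          # pretend this is slow work
    processed.append(tick)
    with counter_lock:
        active -= 1

def worker():
    """The single consumer: drains ticks strictly one at a time."""
    while True:
        tick = tick_queue.get()
        if tick is None:      # sentinel: shut down
            break
        handle(tick)

# A fast "timer" fires five ticks while the handler is still busy:
for i in range(5):
    tick_queue.put(i)
tick_queue.put(None)

consumer = threading.Thread(target=worker)
consumer.start()
consumer.join()

print(max_active, processed)  # 1 [0, 1, 2, 3, 4]
```

With a single consumer thread the ticks can never overlap; the question in the thread is whether `SINGLETON` actually gives you this single-consumer guarantee.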

···

On Thursday, July 26, 2018 at 11:36:58 PM UTC-7, KBNi wrote:

I believe you are correct, but if the service which has the timer runs multiple times, you will still have multiple executions.

The "singleton" handler type doesn't mean that only one worker can execute
at a time. If you want this behaviour, you must either set max_workers=1
for the whole service, or limit the prefetch_count of the event handler's
consumer.
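For reference, the service-wide cap is just a key in the YAML file passed to nameko run; a minimal config sketch (the AMQP_URI value is a placeholder):

```yaml
# config.yaml
AMQP_URI: 'amqp://guest:guest@localhost'
max_workers: 1   # at most one worker at a time for the whole service
```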

In the current version of Nameko it's not straightforward to set the prefetch_count; see the earlier Google Groups discussion for details. Overriding the prefetch_count will get much easier once https://github.com/nameko/nameko/pull/542 lands.

FYI, what the "singleton" event handler actually means is that exactly one
service instance across an entire cluster will receive the event,
irrespective of the name of the service. Contrast this to "broadcast" (all
instances receive the event) and "service pool" (the default, where exactly
one instance of each uniquely named service receives the event). The
"singleton" handler type is quite unintuitive and is likely to be removed
in a future version.
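The three delivery semantics can be illustrated with a toy model in plain Python (no nameko involved; the service and instance names are made up):

```python
import random

# A toy cluster: instances of two distinctly named services,
# all subscribed to the same event type.
cluster = {
    "billing": ["billing-1", "billing-2"],
    "audit": ["audit-1", "audit-2", "audit-3"],
}

def deliver(handler_type, cluster, rng=None):
    """Return the instances that would receive one dispatched event."""
    rng = rng or random.Random(0)
    everyone = [i for group in cluster.values() for i in group]
    if handler_type == "broadcast":
        # every instance of every service receives the event
        return everyone
    if handler_type == "service_pool":
        # the default: exactly one instance of each uniquely named service
        return [rng.choice(group) for group in cluster.values()]
    if handler_type == "singleton":
        # exactly one instance across the whole cluster, any service
        return [rng.choice(everyone)]
    raise ValueError(handler_type)

print(len(deliver("broadcast", cluster)))     # 5
print(len(deliver("service_pool", cluster)))  # 2
print(len(deliver("singleton", cluster)))     # 1
```

Note that even "singleton" says nothing about how many events that one instance handles concurrently — that is governed by max_workers and prefetch_count, as above.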

···

On Friday, July 27, 2018 at 6:01:54 PM UTC+1, foode...@gmail.com wrote:

Yesterday I ran a performance/stress test on this piece of code. We realised that when the system is loaded, we can get two copies of the timer code running at the same time. We thought we had fixed this using the 'smart' approach where only one event can be consumed at a time, but it turns out that this is not true. We saw that the events are consumed out of order, and concurrently too. We are scratching our heads now. Out of order is a concern, but for this case it is not too bad; two events being consumed even after SINGLETON was used is weird, though.

Not sure if somebody can give some advice. It could still be a bug on our side, but we double-checked: all our persistence and logic seem right. Appreciate any tips.


Thanks! Our problem seems to be solved when we set max_workers and parent_calls_tracked to 1. All our problems seem to have disappeared overnight.

However, this raises a question in my head. I know max_workers has been discussed before, but to be specific:

If I start my nameko service like this:

nameko run --config config.yaml service1 service2 service3 service4 service5

and my config.yaml has:

max_workers: 1
parent_calls_tracked: 1

does it mean that the nameko process will have only one worker thread switching between all 5 services, OR one worker per service?

I would appreciate any advice, and a pointer to where nameko documents this.

Currently, my guess is the former, because I see stable memory usage in my container even when I fire high load at it. Previously, CF killed my container when max_workers was not set.

One worker per service. The only difference between this and starting each service individually is that here they share the same Python process. The “worker pool” is still established for each service with its size set from the configuration file.
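A rough stdlib model of that claim — one independent pool of max_workers per service, all sharing one process (the service names are just placeholders):

```python
from threading import Semaphore

max_workers = 1  # from config.yaml; applies per service, not per process
services = ["service1", "service2", "service3", "service4", "service5"]

# One independent worker pool per service, each capped at max_workers.
pools = {name: Semaphore(max_workers) for name in services}

# service1's single worker is now busy...
assert pools["service1"].acquire(blocking=False)
# ...which does not stop service2 from running a worker:
assert pools["service2"].acquire(blocking=False)
# ...but a second concurrent worker for service1 is refused:
assert not pools["service1"].acquire(blocking=False)
```

So setting max_workers: 1 caps each of the five services at one concurrent worker; it does not create a single worker shared across all of them.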

I should also say that the service-level worker pool is a bit strange and probably unhelpful. It made more sense when Nameko was just an RPC library (a long time ago), but now that you can use any protocol you like to fire your entrypoints, per-entrypoint concurrency limits make much more sense. I expect this feature to come along at some point, perhaps even in the 3.x release.

Thanks Matt for the clarification.

One worker pool per service is great. Actually, I only need one of the services to have a pool of one worker; the rest can use the default. Since I am running my services in CF, we are asked not to run multiple processes, as our health checker might have a problem when one of the processes dies (which is not what the health check is looking for).

BTW, I love what nameko is doing. It really helps with microservice development, especially when we need async and guaranteed delivery via RabbitMQ.

One final question: do you think nameko could be refactored to use messaging systems other than RabbitMQ? How easy or hard would that be?

It's pretty easy to do this with a new set of "extensions". The @event_handler entrypoint is an example of a built-in extension, and they're named this way because Nameko encourages adding new ones. The stuff that comes in the box is really just to get you started :)

Check out the writing extensions page of the docs. There is a simplistic example there of using SNS/SQS messaging.