Send RpcReply object to another service

If I have a possibly long RPC call that I am calling asynchronously

long = self.internal_rpc.func.call_async(args)

and I log the results of this to a database with another microservice

self.logging_rpc.log(long.result())

Is it common to pass the RpcReply to another service and wait in that
service? Like:

long = self.internal_rpc.func.call_async(args)
self.logging_rpc.log(long)  # NOTE: this service now has to handle the waiting logic
return ...

Is this common / possible?

Hi,

Sorry for the delayed reply.

This isn't possible, at least using default serialization settings, because
the RpcReply object is not JSON serializable.
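
For illustration, here is roughly what your second snippet amounts to; a minimal sketch assuming nameko's RpcProxy/@rpc, with made-up class, method and service names:

from nameko.rpc import rpc, RpcProxy

class MyService:
    name = 'my_service'
    internal_rpc = RpcProxy('internal')
    logging_rpc = RpcProxy('logging')

    @rpc
    def kick_off(self, args):
        reply = self.internal_rpc.func.call_async(args)
        # reply is an RpcReply object, not plain data, so with the default
        # JSON serializer this next call fails when the message is serialized
        self.logging_rpc.log(reply)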

Is the desire to allow the worker that initiated the async RPC call to
return before the reply is ready?

Hi,

The pseudocode you included (sending an `RpcReply` to another service) is
not possible. The RpcReply contains the metadata needed to fetch a reply,
but only within the requesting service: each service has its own reply
queue where called services send their replies, so in your example the
logging service cannot read replies sent to a different service.

However, there are other patterns you can use to get a similar effect.
Depending on your setup, you could for example arrange things so that
instead of the service doing the work "replying" with the result, it sends
the result to the logging service directly, i.e.

class Internal:
    name = 'internal'
    logging_rpc = RpcProxy('logging')

    @rpc
    def do_work(self, args):
        result = work(args)
        # send the result to the logging service instead of replying with it
        self.logging_rpc.report_result(result)
        return None
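
For completeness, the receiving side could just be a plain @rpc method on the logging service; the class and method below are only a sketch of what that might look like, not anything prescribed by nameko:

import logging
from nameko.rpc import rpc

logger = logging.getLogger(__name__)

class LoggingService:
    name = 'logging'

    @rpc
    def report_result(self, result):
        # persist the result however you like (database, file, ...);
        # logging it here is just a stand-in
        logger.info('got result: %r', result)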
Best,
David

Hi,

@davidszotten's solution actually works quite well, but it requires every
service that does the work to handle the call to the logging service itself.

I have a related but not quite identical problem with result persistence.
In my case I have a chain of RPC calls where the first is initiated using a
ClusterRpcProxy context manager. I'd like to be able to restart the process
that creates that proxy and somehow re-establish the connection so that I
can get the result back in the restarted process. Unfortunately I can't
pickle the result object either (there seems to be an eventlet object in
the way). Anyway, I'd also be interested in any information or tricks
anyone has for "persisting" job calls.

Best,
Davis

So the RPC code is specifically built for request-response patterns. It's
entirely possible to use other patterns; you just need the right tool for
the job. It depends on context, but another possible pattern could be to
set up a dedicated (persistent) "result" queue and have clients post
results there instead of sending them as replies.

You could build yourself a custom extension that packages this up more
nicely (we can help guide you if you decide to go down that route), but as
a hacky example:

from nameko.rpc import rpc, RpcProxy
from nameko.standalone.rpc import ClusterRpcProxy


class Caller:
    name = 'caller'
    worker_rpc = RpcProxy('worker')

    @rpc
    def results(self, data):
        handle_result(data)

    @rpc
    def submit(self, params):
        # request the work, telling the worker where to send the result
        self.worker_rpc.do_work.call_async(params, reply_to='caller.results')


class Worker:
    name = 'worker'

    @rpc
    def do_work(self, params, reply_to):
        result = work(params)  # the actual long-running work
        # config here is the usual nameko config dict (AMQP_URI etc.)
        with ClusterRpcProxy(config) as cluster:
            service_name, method_name = reply_to.split('.')
            service = getattr(cluster, service_name)
            method = getattr(service, method_name)
            method.call_async(result)

This way, results are sent to an RPC (or some other persistent) queue
rather than a temporary rpc-reply queue, so they will survive e.g. service
restarts.
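
As a usage sketch (the AMQP URI and payload below are placeholders, and it assumes `submit` is exposed with @rpc as above), the process that kicks off the work can then be completely decoupled from the process that eventually receives the result; a standalone script or nameko shell can submit a job and exit:

from nameko.standalone.rpc import ClusterRpcProxy

config = {'AMQP_URI': 'amqp://guest:guest@localhost'}

with ClusterRpcProxy(config) as cluster:
    # fire and forget: the worker delivers the result to caller.results,
    # so this process does not need to stay alive to collect a reply
    cluster.caller.submit.call_async({'job': 'example'})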

Best,
David
