Hey everyone, I figured I'd announce here that I recently worked on a
caching RPC implementation for nameko services and decided to make it open
source for everyone to look at and improve:
GitHub - santiycr/nameko-cachetools: A few tools to cache interactions between your nameko services, increasing resiliency and performance at the expense of consistency, when it makes sense.
The idea is to increase resiliency in our infrastructure by caching calls to
specific services that tend to be read-only and could become a single point
of failure for the rest of the infra.
Here's the gist of it:
from nameko.rpc import rpc
from nameko_cachetools import CachedRpcProxy

class Service(object):
    name = "demo"

    other_service = CachedRpcProxy('other_service', failover_timeout=3)

    @rpc
    def do_something(self, request):
        # this rpc response will be cached; further queries will be
        # timed, and the cached value will be returned if no response is
        # received within 3 seconds or an exception is raised at the
        # destination service
        return self.other_service.get_value('hi')
I built two different strategies: cache on issues (use the cache if
something goes wrong at the destination), and cache first (use the cache
before even talking to the destination). Please share your thoughts and
feedback!
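
As a rough sketch of how the two strategies might sit side by side in a
service (service and method names here are illustrative, and the cache-first
class name is an assumption; check the package for the exact name):

from nameko.rpc import rpc
from nameko_cachetools import CachedRpcProxy, CacheFirstRpcProxy  # second name assumed

class Service(object):
    name = "demo"

    # cache on issues: always call the destination first, fall back to a
    # previously cached response if nothing arrives within 3 seconds or
    # the destination raises an exception
    settings = CachedRpcProxy('settings_service', failover_timeout=3)

    # cache first (name assumed): serve a cached response immediately when
    # one exists, only calling the destination on a cache miss
    static_data = CacheFirstRpcProxy('static_data_service')

    @rpc
    def handle(self, request):
        return self.settings.get_value('rate_limit')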
Santi @ Blameless
Looks like a good tool to use with config managers, if used as services.
This is really awesome. I've added it to the community extensions page of
the docs.
You might want to keep an eye out for the 3.x release of Nameko that's in
the works. Amongst other things it'll
include https://github.com/nameko/nameko/pull/542, which rearranges the
internals of the RPC proxy (ServiceProxy and MethodProxy have been combined
into a single object). nameko-cachetools will need some small tweaks to
accommodate these changes.
@santiago Thank you for releasing this. We do a similar thing here, with a
central config manager (heavily leveraging YAML templates) that multiple
services call to.
Some of the configs reference filesystem resources that can be expensive
in terms of IO. Currently I do that as late as possible, to the detriment
of reporting. This little addition means I can do it much earlier and
cache the result.
Thank you!
Geoff
Yeah, that was our actual use case. All our system's settings are centralized in a service and other services pull them on demand when needed. Caching was essential to improving the reliability of the whole thing if the settings service went down.
Sounds good! I'll tweak things to make sure the right version of
nameko-cachetools can track each major version of Nameko.
Thanks for the heads up!
Happy to hear that, Geoff. Contributions are always welcome!
I've been using this tool for remote config management:
GitHub - dynaconf/dynaconf: Configuration Management for Python ⚙
It could be interesting to build it as a dependency provider in the future.
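
For anyone curious, a minimal sketch of what a dynaconf-backed dependency
provider could look like; dynaconf's own settings sources (files, environment
variables, remote loaders) are left to its defaults here, and the settings key
is just illustrative:

from dynaconf import LazySettings
from nameko.extensions import DependencyProvider
from nameko.rpc import rpc

class Config(DependencyProvider):
    """Expose dynaconf settings to workers as a nameko dependency."""

    def setup(self):
        # dynaconf resolves its own sources; their configuration is omitted
        self.settings = LazySettings()

    def get_dependency(self, worker_ctx):
        return self.settings

class Service(object):
    name = "demo"

    config = Config()

    @rpc
    def greeting(self):
        # "GREETING" is a hypothetical key defined in your settings
        return self.config.get("GREETING", "hello")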
I was wondering something now - is it possible for a dependency provider to
access others? For example, suppose I want to have a database dependency
provider. It would be interesting to use this to manage access to the
database. Is using a ClusterRpcProxy or something like that from within the
dependency provider implementation a good way?
Ideally, though, even the connection to RabbitMQ should be handled using a
centralized settings manager.
If you have two dependencies that you want to use together (e.g. RabbitMQ
and a database), you have two good options:
1. Glue them together using a service method. This is the "obvious" way but
will feel wrong if you want to have the interaction abstracted away and not
cluttering up your service.
2. Make a combined DependencyProvider that talks to both dependencies. This
will nicely abstract them away behind a single interface.
A third option is to use a "contract" of sorts between DPs. All bound
Extensions have access to the others via the service container, so you can
access another DP by name if it exists. This couples the extensions
together though, so it is not my preference.
The downside of the "combined" DP is that you may end up with duplicate
resources. For example, your combined DP with an RPC proxy may be used in a
service that also declares an RPC proxy. This may feel slightly wasteful
but I generally prefer it for the cleaner interface that it gives you.
Decent chunks of the "duplicate" resources are often shared at the process
level anyway, so it's not as wasteful as it may feel. For example, Kombu uses
a process-level connection pool; even if you have duplicate message
publishers, ultimately they won't be using many more underlying
connections, which are the expensive bit.
I guess the use case here is a DependencyProvider that configures itself
using settings fetched over RPC? In this case I would definitely recommend
an embedded RPC proxy -- the reason is that you really want this setup to
happen "out of band". The initial setup would need to happen during
container setup, at which time the other extensions may not be ready to use.
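
To make option 2 and the "out of band" idea concrete, here is a rough sketch
(not part of nameko itself) of a combined DependencyProvider that fetches its
own settings over a standalone RPC proxy during container setup and then hands
each worker a database connection; the "settings" service and its "get_value"
method are assumed names:

from nameko.extensions import DependencyProvider
from nameko.standalone.rpc import ClusterRpcProxy
from sqlalchemy import create_engine

class ConfiguredDatabase(DependencyProvider):
    """Combined dependency: RPC-fetched settings plus a database engine."""

    def setup(self):
        # out-of-band RPC: a standalone proxy is used because the service's
        # own RPC extensions may not be ready during container setup
        with ClusterRpcProxy(self.container.config) as cluster_rpc:
            db_uri = cluster_rpc.settings.get_value("DB_URI")
        self.engine = create_engine(db_uri)

    def stop(self):
        self.engine.dispose()

    def get_dependency(self, worker_ctx):
        # each worker gets its own connection; pooling lives in the engine
        return self.engine.connect()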
A small point of order: I think it's easier to follow (and later, find) the
discussions here if new questions are sent as new messages rather than as
replies to existing discussions.
Thanks,
David
Matt, this answer is super relevant and something we struggled with quite a bit for the same reason Guilherme pointed out. Thanks for the tip! Could we drop this into the docs somewhere? Maybe an FAQ or Troubleshooting section. I'd be happy to get that started and send a PR if you guys think it's a good idea.
Santiago, this might be a little off-topic (sorry for that) but could you
share what tool you're using for your centralized settings?
It's actually just another nameko service! It uses AES encryption for sensitive keys, and all settings are stored in a standard DB (MongoDB right now). Settings are pulled from other services using RPC and cached using the package I just shared. Depending on the setting, we use TTL caches or just plain dicts, based on the level of consistency we want to have and the cost of a mishap. Happy to answer more directly if you want to email me.
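
For illustration, the TTL-versus-plain-dict trade-off might look something like
this, assuming the proxy accepts a dict-like cache object via a cache keyword
(that keyword name is an assumption; check nameko-cachetools for the exact API):

from cachetools import TTLCache
from nameko.rpc import rpc
from nameko_cachetools import CachedRpcProxy

class Worker(object):
    name = "worker"

    # volatile settings: cached entries expire after five minutes, so
    # staleness is bounded (the `cache` keyword is an assumption)
    settings = CachedRpcProxy('settings', cache=TTLCache(maxsize=1024, ttl=300))

    # effectively static settings: a plain dict never expires, trading
    # consistency for resilience if the settings service goes down
    flags = CachedRpcProxy('settings', cache={})

    @rpc
    def handle(self, request):
        return self.settings.get_value('rate_limit')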
Absolutely, would love to have these kinds of things documented and
categorised somewhere.
I think an FAQ would be a good place to start. Most issues raised on GitHub
are questions and it'd be great to have the answers compiled into one place.
I have actually just applied to Discourse's programme for free hosting for
open-source projects. If they come through, that would be a great place to
put them. Until then, maybe a GitHub wiki page?
···
On Thursday, June 14, 2018 at 6:29:04 PM UTC+1, Santiago Suarez Ordoñez wrote:
This answer is super relevant and something we struggled with quite a bit
for the same reason Guilherme pointed out. Thanks for the tip! Could we
drop this into the docs somewhere? Maybe an FAQ or Troubleshooting section.
I'd be happy to get that started and send a PR if you guys this it's a good
idea.
On Thu, Jun 14, 2018 at 12:37 AM, Matt Yule-Bennett< > bennett.matthew@gmail.com>wrote:
If you have two dependencies that you want to use together (e.g. RabbitMQ
and a database) you have two good options:
1. Glue them together using a service method. This is the "obvious" way
but will feel wrong if you want to have the interaction abstracted away and
not cluttering up your service.
2. Make a combined DependencyProvider that talks to both dependencies.
This will nicely abstract them away behind a single interface.
A third option is to use a 'contract" of sorts between DPs. All bound
Extensions have access to the others via the service container. so you can
access another DP by name if it exists. This couples the extensions
together though, so is not my preference.
The downside of the "combined" DP is that you may end up with duplicate
resources. For example, your combined DP with an RPC proxy may be used in a
service that also declares an RPC proxy. This may feel slightly wasteful
but I generally prefer it for the cleaner interface that it gives you.
Decent chunks of the "duplicate" resources are often shared at the process
level anway, so it's not as wasteful as it may feel. For example Kombu uses
a process level connection pool; even if you have duplicate message
publishers, ultimately they won't be using many more underlying
connections, which are the expensive bit.
I guess the use-case here is a DependencyProvider that configures itself
using settings fetched over RPC? In this case I would definitely recommend
an embedded RPC proxy -- the reason is that you really want this setup to
happen "out of band". The initial setup would need to happen during
container setup, at which time the other extensions may not be ready to use.
On Thursday, June 14, 2018 at 3:11:35 AM UTC+1, Guilherme Caminha wrote:
I was wondering something now - is it possible for a dependency provider
to access others? For example, suppose I want to have a database dependency
provider. It would be interesting to use this to manage the access to the
database. Is using a ClusterRpcProxy or something like that from within the
dependency provider implementation a good way?
Ideally, though, even the connection to RabbitMQ should be handled using
a centralized settings manager.
2018-06-13 10:54 GMT-03:00 Santiago Suarez Ordoñez <santi@blameless.com>
:
Sounds good! I'll tweak things to make sure the right version of
cachetools can track each major version!
Thanks for the heads up
Done! FAQ Troubleshooting · nameko/nameko Wiki · GitHub
···
On Fri, Jun 15, 2018 at 2:31 AM, Matt Yule-Bennett <bennett.matthew@gmail.com> wrote:
Absolutely, would love to have these kinds of things documented and
categorised somewhere.
I think an FAQ would be a good place to start. Most issues raised on
GitHub are questions, and it'd be great to have the answers compiled in
one place.
I have actually just applied to Discourse's programme for free hosting for
open-source projects. If they come through, that would be a great place to
put them. Until then, maybe a GitHub wiki page?