Hi, we also use DataDog here at Student.com, but we don't yet have DD
integrated with Nameko for collecting application metrics. We trace our
entrypoints using Nameko Tracer
<https://github.com/Overseas-Student-Living/nameko-tracer> and transport
and inspect the traces in an ELK setup. We are also thinking of sending traces
to DD in addition to the existing ELK solution.
I had a quick look at the Python DD tracer and it looks like there are a
number of ways to integrate it with Nameko services, or with Nameko itself.
For casual tracing I would try the wrap decorator and wrap service
entrypoints directly; I suppose it would trace them on entry and on exit:
from ddtrace import tracer
from nameko.rpc import rpc

class Service:
    name = "service"

    @tracer.wrap()
    @rpc
    def say_hello(self):
        pass
Another approach would be to go a bit deeper into the Nameko framework and
write a dependency provider which inspects each worker before and after
entrypoint execution and uses the ddtrace API to send traces to the DD agent.
The same inspection is already done by Nameko Tracer
<https://github.com/Overseas-Student-Living/nameko-tracer/blob/master/nameko_tracer/dependency.py>
as Jakub pointed out in his reply.
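Just to illustrate, a minimal sketch of such a provider could look something
like the following. It assumes the current ddtrace API (tracer.trace /
span.finish) and the standard Nameko worker hooks; the span name and the
DataDogTracer class name are placeholders, not anything that exists today:

from ddtrace import tracer
from nameko.extensions import DependencyProvider

class DataDogTracer(DependencyProvider):
    """Sketch only: open a ddtrace span when a worker starts and
    close it when the worker finishes."""

    def setup(self):
        self.spans = {}

    def worker_setup(self, worker_ctx):
        # start a span named after the service and entrypoint method
        self.spans[worker_ctx] = tracer.trace(
            "nameko.entrypoint",
            service=worker_ctx.container.service_name,
            resource=worker_ctx.entrypoint.method_name,
        )

    def worker_result(self, worker_ctx, result=None, exc_info=None):
        span = self.spans.pop(worker_ctx, None)
        if span is None:
            return
        if exc_info is not None:
            # record the exception on the span before closing it
            span.set_exc_info(*exc_info)
        span.finish()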
One of the features of Nameko Tracer is that it separates metrics collection
from structuring, formatting and transporting them to the desired destination,
by using the standard Python logging mechanisms - loggers, handlers,
formatters and filters. So the whole thing is quite modular and can be
configured by users in the standard way. I would prefer extending Nameko
Tracer with a new ddtrace <https://github.com/DataDog/dd-trace-py> logging
handler, as it would make the solution nicely pluggable and users would be
able to configure their tracing through logging configuration they already
understand.

So there is an option to extend Nameko Tracer with a logging handler that
talks to DD agents using the ddtrace API, or the other option would be to
write a similar dependency provider from scratch. Various tracers have various
APIs, various ways of structuring their metrics and a number of different ways
of transporting them to their destination. Not to mention the visualisation
part :) There is the OpenTracing <http://opentracing.io/> project which tries
to solve this problem, and an implementation of it would also be a nice
contribution to Nameko Tracer.
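To sketch the logging handler idea: roughly, the handler would read the trace
data that Nameko Tracer attaches to the log record and forward it via ddtrace.
The class name, the "nameko_trace" record attribute and the keys inside the
trace dict below are assumptions on my part - the exact names are in the
Tracer's constants module:

import logging
from ddtrace import tracer

class DataDogHandler(logging.Handler):
    """Sketch only: turn Nameko Tracer log records into ddtrace spans.
    Assumes the trace data dict is attached to the record as
    'nameko_trace' (check the Tracer source for the exact key)."""

    def emit(self, record):
        trace = getattr(record, "nameko_trace", None)
        if not trace:
            return
        # open and immediately close a span; a real implementation
        # would take timings and tags from the trace data
        span = tracer.trace(
            "nameko.entrypoint",
            service=trace.get("service"),
            resource=trace.get("entrypoint"),
        )
        span.finish()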
With a DataDog extension of Nameko Tracer, enabling DD tracing for all
entrypoints of a Nameko service could be just a config change:
# config.yaml
LOGGING:
    version: 1
    formatters:
        tracer:
            (): nameko_tracer.formatters.DataDogFormatter
    handlers:
        tracer:
            class: nameko_tracer.handlers.DataDogHandler
            formatter: tracer
    loggers:
        nameko_tracer:
            level: INFO
            handlers: [tracer]
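Nameko applies the LOGGING block of config.yaml via logging.config.dictConfig
when the service is started with nameko run --config config.yaml, so enabling
or disabling DD tracing would stay a pure configuration concern, with no code
changes in the services themselves.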
There is yet another way to collect Nameko entrypoint metrics for DD agents,
which may be a bit controversial but which looks like the preferred way of
adopting DD tracing for users of various frameworks. DDtrace has monkey
patches for many existing libraries and frameworks. These patch the existing
libraries so that tracers can be set at any point of the lib's execution. That
way framework users only import and run the patching function and all the
magic is done behind the scenes:
from ddtrace import patch_all
patch_all()
It is handy for users, but it's monkey patching with all its risks and
unpleasant surprises. As it does not use the lib's API, but rather its
internals, it is much harder to maintain and requires a deep understanding of
the patched lib. On the other hand, some of the most popular Python libraries
are also based on monkey patching and on magic done behind the scenes
(pytest, eventlet, gevent, ...).
I think that all three approaches are valid. We already had a chat about
writing the DD extensions to Nameko Tracer
<https://github.com/Overseas-Student-Living/nameko-tracer>. The monkey
patching approach should probably be a PR to ddtrace/contrib
<https://github.com/DataDog/dd-trace-py/tree/master/ddtrace/contrib>.
Ondrej
On Monday, 5 March 2018 20:46:11 UTC, jackal wrote:
Hello all,
I would like to use the datadog tracer with Nameko, and I couldn't find a
way to do it, especially the code responsible for running the entrypoints
(worker_setup and worker_result don't do that).
Has someone done this before, or does someone know how to do it?