Best practice for passing around Nameko dependencies

Hi,

Sorry for the long question.

I'm pretty new to Nameko and still trying to understand best practices.
Specifically, I'd like to know the best way to access DependencyProviders
in a complex application. Most of the nameko examples are single-script
examples, so accessing Dependencies is fairly simple. I'm curious what the
best practice is for accessing Dependencies if you have a larger, more
complex application.

For example, let's say you have two scripts: *service.py* and *logic.py*.
*service.py* contains our nameko service, and *logic.py* contains our
application logic. If I want to access dependencies from *service.py* in
*logic.py*, then I have to pass those dependencies explicitly (see below):

#service.py
from nameko.rpc import rpc
from nameko.dependency_providers import Config
from nameko_sqlalchemy import Session
from nameko_redis import Redis
from custom_logger import Logger  # <-- my custom logger DependencyProvider
                                  #     that takes care of MDC logging for me

from logic import Logic


class Service(object):
    name = 'test'

    session = Session(Base)  # Base is our declarative base, defined elsewhere
    redis = Redis()
    config = Config()
    logger = Logger()

    @rpc
    def process(self, data):
        Logic.handle_process(self.session, self.redis, self.config,
                             self.logger, data)

#logic.py

class Logic(object):
    @classmethod
    def handle_process(cls, session, redis, config, logger, data):
        DatabaseLogic.read_from_db(session, redis, config, logger, data)


class DatabaseLogic(object):
    @classmethod
    def read_from_db(cls, session, redis, config, logger, data):
        if config.get('dev'):
            try:
                redis.get(data)
            except Exception:
                session.query(data)
            logger.info("Done reading from db")

As you can see, passing the dependencies around is pretty cumbersome, and
as you add more dependencies and more logic modules, it basically becomes
unreadable. The two possible fixes that I could imagine are:

1) Somehow you can access worker_ctx from any module (in the same way you can
access a request from anywhere in a Flask application):

from nameko.fake_module import current_worker_ctx


class Logic(object):
    @classmethod
    def handle_process(cls, data):
        DatabaseLogic.read_from_db(data)


class DatabaseLogic(object):
    @classmethod
    def read_from_db(cls, data):
        session = Session.get_dependency(current_worker_ctx)
        redis = Redis.get_dependency(current_worker_ctx)
        config = Config.get_dependency(current_worker_ctx)
        logger = Logger.get_dependency(current_worker_ctx)

        if config.get('dev'):
            try:
                redis.get(data)
            except Exception:
                session.query(data)
            logger.info("Done reading from db")

2) Move away from DependencyProviders and create singleton classes that
allow you to access application objects anywhere in the application. For
example, my Database singleton might look like:
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker


class _DatabaseManager(object):
    def __init__(self):
        self.engine = create_engine(...)
        self.sessionmaker = sessionmaker(...)

    def get_session(self):
        # scoped_session ensures that the session is threadsafe
        return scoped_session(self.sessionmaker)


Database = _DatabaseManager()

This would allow me to call Database.get_session() from anywhere in my
application.

Any guidance here would be much appreciated because I'm sure I'm probably
thinking about something incorrectly!

Best,
Diego

What you're describing is a classic problem of dependency injection. Where
Flask opts for global objects and thread-local variables, Nameko chooses to
make the references explicit. It's a conscious choice that (we think) makes
testing much easier.

It doesn't prohibit complex applications though. It's hard to
comprehensively argue that without deeper examples, but for the snippet
you've shown, you could easily implement the "logic" as a dependency
provider. You'd pass in the config, and maintain connections to redis and
your relational database inside it.
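
For what it's worth, a rough sketch of that idea might look like the
following. The only nameko API here is DependencyProvider and its
setup/stop/get_dependency hooks; the config keys, the ProcessApi wrapper
and the Record model are illustrative assumptions, not anything from
nameko or its libraries:

#logic_dependency.py
import redis
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from nameko.extensions import DependencyProvider

from models import Record  # hypothetical declarative model


class ProcessApi(object):
    """Thin wrapper injected into the service; hides the storage details."""

    def __init__(self, session, redis_client):
        self.session = session
        self.redis = redis_client

    def read(self, key):
        cached = self.redis.get(key)
        if cached is not None:
            return cached
        return self.session.query(Record).get(key)  # fall back to the db


class ProcessLogic(DependencyProvider):
    """Owns the redis and database connections for the service's lifetime."""

    def setup(self):
        config = self.container.config
        self.engine = create_engine(config['DB_URI'])
        self.session_cls = sessionmaker(bind=self.engine)
        self.redis = redis.StrictRedis.from_url(config['REDIS_URI'])

    def stop(self):
        self.engine.dispose()

    def get_dependency(self, worker_ctx):
        # a fresh session per worker, sharing the long-lived redis client
        return ProcessApi(self.session_cls(), self.redis)

The service class then just declares processor = ProcessLogic() and its
entrypoints call self.processor.read(data), with nothing else to thread
through logic.py.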

Another pattern that I've used in the past is to maintain "contracts"
between dependency providers. All bound extensions have access to the
container, and the worker context contains a reference to the current
"service instance" (with dependencies already injected), so it's relatively
easy to access one dependency (the provider or the injection) from another.
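
As a concrete (if contrived) illustration of that: here's a hypothetical
provider whose injection reuses the logger and redis client already
injected into the same worker by the other providers on your Service class
above (assuming your Logger injection behaves like a stdlib logger).
worker_ctx.service and worker_ctx.call_id are real nameko attributes; the
Auditor provider itself and the "audit:count" key are made up:

import functools

from nameko.extensions import DependencyProvider


class Auditor(DependencyProvider):

    def get_dependency(self, worker_ctx):
        # bind the worker context now; the other dependencies are looked up
        # lazily, when the worker actually calls the injected function
        return functools.partial(self.audit, worker_ctx)

    def audit(self, worker_ctx, message):
        # worker_ctx.service is the running service instance, with every
        # other dependency (session, redis, config, logger) already injected
        service = worker_ctx.service
        service.logger.info("audit [%s]: %s", worker_ctx.call_id, message)
        service.redis.incr("audit:count")

A worker method would then just call self.auditor("read from db"), and the
call goes through the same Logger and Redis dependencies declared on the
service, without passing them around explicitly.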

Dependency injection generally encourages a flattening of logic. You *can*
make objects that must be constructed with dependencies, but as you say, it
gets messy. I generally find that pushing logic into dependency providers
themselves, or splitting service classes into mixins (or indeed independent
services) is better. Globals and singletons are definitely not "the Nameko
way" and will fight with the generally encouraged test patterns
<http://nameko.readthedocs.io/en/stable/testing.html>.

If you haven't already done so you might want to check out the (work in
progress) sample app <https://github.com/nameko/nameko-examples>.

Hope that helps.

Matt.

Hey Matt,

Thanks for the quick response.

I have a question regarding your suggestion "*you could easily implement
the "logic" as a dependency provider.*" First, I'm not entirely clear what
this dependency provider would look like. Would it be talking to other
DependencyProviders (nameko_sqlalchemy, nameko_redis) to get access to
sessions and the cache (i.e. maintaining contracts as you stated before)?
Or would it be maintaining its own connections? Second, in this thread
(which seems like a similar use case), David Szotten mentions that
"Dependencies are meant for interactions with the outside world, and not
really for business logic." So my initial inclination would be not to
implement this as a DependencyProvider...but I could be wrong here.

I think you're right that the correct way of addressing this would be to
separate logic.py into its own service.

Thanks for your help!

Best,
Diego

> Hey Matt,
>
> Thanks for the quick response.
>
> I have a question regarding your suggestion "*you could easily implement
> the "logic" as a dependency provider.*" First, I'm not entirely clear what
> this dependency provider would look like. Would it be talking to other
> DependencyProviders (nameko_sqlalchemy, nameko_redis) to get access to
> sessions and the cache (i.e. maintaining contracts as you stated before)?
> Or would it be maintaining its own connections?

For this specific example (trying to fetch from redis and falling back to a
relational database) I would have the DependencyProvider own both
connections, rather than using "contracts". It could work either way
though, and I'm obviously not seeing the full picture here.

I'd also try to implement the MDC logging as a python logging handler
<https://docs.python.org/3/library/logging.handlers.html>, to avoid having
that in my service class. (As a bonus, it would also then work outside the
service context).
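
For what it's worth, a minimal sketch of that: a logging.Filter attached
to a handler (a filter rather than a handler subclass, but the same idea),
with a plain thread-local standing in for wherever you keep the MDC; under
eventlet monkey-patching a thread-local is per-greenlet, i.e. per-worker:

import logging
import threading

mdc = threading.local()  # stand-in for the MDC


class MDCFilter(logging.Filter):
    def filter(self, record):
        # copy the current MDC values onto every log record
        record.call_id = getattr(mdc, "call_id", "-")
        return True


handler = logging.StreamHandler()
handler.addFilter(MDCFilter())
handler.setFormatter(
    logging.Formatter("%(levelname)s [%(call_id)s] %(message)s"))
logging.getLogger().addHandler(handler)

A tiny DependencyProvider (or the Logger provider you already have) could
set mdc.call_id = worker_ctx.call_id in worker_setup() and clear it again
in worker_teardown(); every log line emitted during that worker then
carries the context, whether or not it came from service code.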

> Second, in this thread (which seems like a similar use case), David
> Szotten mentions that "Dependencies are meant for interactions with the
> outside world, and not really for business logic." So my initial
> inclination would be not to implement this as a DependencyProvider...but
> I could be wrong here.

David's statement is correct: dependencies are meant for interaction with
the outside world and not business logic. I would argue that in this case
though, your "logic.py" isn't really business logic at all. It's an
abstraction that adds a nice interface on top of two storage engines.

The main reason for using dependency injection is to make testing service
code easier. Presenting a nice clean interface to the service code (e.g.
`persist_data(some_data)`), even if there is some complexity behind that
interface, makes it easy to reason about (and test) what the service code
should be doing. You can separately test the complexity of the
DependencyProvider.
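
That's where worker_factory from nameko.testing.services comes in: it
builds a service instance with every DependencyProvider replaced by a
MagicMock (unless you pass a real alternative). A rough sketch, with a
made-up service and a made-up persist_data() interface:

from nameko.extensions import DependencyProvider
from nameko.rpc import rpc
from nameko.testing.services import worker_factory


class Storage(DependencyProvider):
    """Stand-in provider; in the test it is replaced by a mock anyway."""


class OrdersService(object):
    name = 'orders'

    storage = Storage()

    @rpc
    def create(self, data):
        self.storage.persist_data(data)  # clean interface, complexity hidden


def test_create():
    service = worker_factory(OrdersService)  # storage becomes a MagicMock
    service.create({"some": "data"})
    service.storage.persist_data.assert_called_once_with({"some": "data"})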

Again, take a look at the example app. The Products service's Storage
dependency returns a logical wrapper
<https://github.com/nameko/nameko-examples/blob/master/products/products/dependencies.py#L12>
around the bare redis connection. The wrapper does contain some logic, but
it's all related to persistence and only there to simplify the interface
presented to the service code.

Note that the "StorageWrapper" is basically the same pattern as David
suggested to Conor (i.e. a bare class that receives the client in its
constructor), just in a different namespace.

> I think you're right that the correct way of addressing this would be to
> separate logic.py into its own service.

Perhaps, although be careful of writing "utility" services, because they
introduce a lot of coupling. A service that handles all database writes for
other services will be very tightly bound, which negates the point of
splitting things into services in the first place!
