Advice for Jinja2 templates in Nameko service


I have a shared ‘config’ service, that makes use of Jinja2 templates to render call-specific configurations.

The templates themselves are very simple - just YAML with variables injected at render from the RPC request payload.

It works really well, allowing me to deploy changes to configs without deploying changes to other service code.

The only strange behavior is that over time (a day or so) the service response times start to degrade. If I reload the config service, it’s back to full speed.

I’m not convinced it’s a Nameko problem per se, but it may be related to how I run Jinja.

I instantiate the Jinja2 environment outside the Nameko service class:

jinja = Environment(loader=FileSystemLoader(TEMPLATES), trim_blocks=True, lstrip_blocks=True)

and I render the templates at runtime:

class Config:
    name = os.environ['APP_NAME']
    sentry = SentryReporter()
    catalog = RpcProxy('io.catalog')

    def render_template(self, template_name, **kwargs):
        """
        :param template_name: Path to template file
        :param kwargs: Arguments to hand to the template
        :return: rendered template, or an empty dict on failure

        We want bad templates to fail semi-gracefully.
        """
        try:
            return yaml.load(jinja.get_template(template_name).render(**kwargs))
        except Exception as e:
            msg = dict(error=str(e), template=template_name, book_id=kwargs.get('book_id'))
            logger.error(redact(json.dumps(msg, indent=2)))
            return {}

    def book(self, book_id):
        details = self.details(book_id)
        return self.render_template('book.jinja2', detail=details)

    def details(self, book_id):
        return self.catalog.details(book_id)
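For context, the templates themselves look something like this (the field names here are illustrative, not the real ones):

```jinja
# book.jinja2 – plain YAML with Jinja2 placeholders
title: {{ detail.title }}
isbn: {{ detail.isbn }}
{% if detail.authors %}
authors:
{% for author in detail.authors %}
  - {{ author }}
{% endfor %}
{% endif %}
```

With trim_blocks and lstrip_blocks enabled, the control-flow lines disappear cleanly from the rendered YAML.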

I was wondering if anyone has any thoughts on or experience with Jinja2 + Nameko, and whether anyone can see anything wrong with or missing from my setup.

Many thanks in advance,


That is strange behaviour. Are there any other metrics that change other than response time? (e.g. CPU load, memory usage)

One thing to sanity check – does it make a difference if you remove the SentryReporter? The Sentry client does clever stuff to capture breadcrumbs, and I have previously seen it do funny things like leaking memory due to circular references.

There was a regression along these lines in nameko-sentry, but it has been fixed since v0.0.5.

Another thing to try would be profiling your render_template method, to see whether it really is what’s slowing down. A profiler ought to help with that.
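A minimal sketch of that kind of profiling with the stdlib’s cProfile. The render_template here is a hypothetical stand-in for the real service method, just to make the sketch self-contained:

```python
import cProfile
import io
import pstats


def profile_call(func, *args, **kwargs):
    """Run func under cProfile and return (result, stats report text)."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args, **kwargs)
    profiler.disable()
    stream = io.StringIO()
    # Sort by cumulative time so the expensive call paths float to the top
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
    return result, stream.getvalue()


# Hypothetical stand-in for the service's render_template call:
def render_template(template_name):
    return {"rendered": template_name}


result, report = profile_call(render_template, "book.jinja2")
print(report)
```

If get_template or the template compile step dominates the cumulative time, that points at caching rather than rendering.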

Just to follow up on my own question - this turned out to be a Jinja2 issue, not a Nameko one.

I increased the template cache with cache_size=200 (which is apparently now the default) and disabled auto-reloading with auto_reload=False (since I restart my services on changes anyway). These two changes halved my HTTP response times, and the slow-down has gone away 🙂
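For anyone landing here later, a minimal sketch of that tuned setup. It uses a DictLoader in place of the FileSystemLoader from the original post, purely so the example is self-contained:

```python
from jinja2 import DictLoader, Environment

# DictLoader stands in for FileSystemLoader(TEMPLATES) in the original setup
jinja = Environment(
    loader=DictLoader({"book.jinja2": "title: {{ detail.title }}"}),
    trim_blocks=True,
    lstrip_blocks=True,
    cache_size=200,     # keep up to 200 compiled templates in memory
    auto_reload=False,  # skip the staleness check on every get_template()
)

first = jinja.get_template("book.jinja2")
second = jinja.get_template("book.jinja2")
print(first is second)  # the compiled template is served from the cache
```

With auto_reload=False, Jinja2 never re-stats the template source on lookup, so repeated get_template calls return the cached compiled template directly.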