*First off, a spec of the setup I'm running:*
- AWS EC2 instances in the bottom layer.
- Kubernetes hosting Docker containers.
- Clustered RabbitMQ (two servers) with an ELB in front.
- Redis
- Influx
- Nameko 2.6.0
*Problem:*
In short, I'm experiencing memory allocation problems: the Python process
running Nameko won't release memory back to the container, which causes the
container to restart when it hits its memory limit (in my case 512MB). An
easy way to reproduce this in my environment is to log some random messages;
the more I log, the bigger the leak. This is not proof that the logging is
the core problem, but maybe an indication?
Below is the sample code I use to reproduce the problem. As you might notice,
I have a circular event dispatch/handling setup; this is just for testing
purposes. My "real" code does not have this pattern but still experiences the
same problem. Even with the most minimal implementation, a service that does
nothing but a simple logging.info("message") shows the memory leak.
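Roughly, that minimal case is nothing more than this (a sketch, names are
placeholders):

import logging

from nameko.timer import timer


class MinimalService:
    name = "minimal_service"

    @timer(interval=5)
    def log(self):
        # nothing but a plain log call, no Redis, no event dispatch
        logging.info("This is a rather long log message?")
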
*service.py*
import logging

from nameko.timer import timer
from nameko.events import event_handler
from nameko.events import EventDispatcher

from service.redis_cli import RedisDependency


class Service:
    name = "service"

    dispatcher = EventDispatcher()
    r_db = RedisDependency()

    @timer(interval=5)
    def log(self):
        logging.info("This is a rather long log message?")
        self.dispatcher("publish", {"message": "hello world"})

    @event_handler("uc_sensor_pipeline", "new_reading")
    def do_log(self, payload):
        logging.info(payload)

    @event_handler("service", "publish")
    def handle_publish(self, payload):
        self.r_db.lpush('logging', '123', payload)
        logging.info(payload['message'])
*redis_cli.py*
from nameko.extensions import DependencyProvider
from redis import StrictRedis


class RedisWrapper:
    def __init__(self, client):
        self.client = client

    def lpush(self, prefix, key, value):
        full_key = self._format_key(prefix, key)
        self.client.lpush(full_key, value)
        self.client.ltrim(full_key, 0, 9)

    def get(self, prefix, key):
        full_key = self._format_key(prefix, key)
        return self.client.lrange(full_key, 0, -1)

    def _decode(self, item):
        return {k.decode(): v.decode() for k, v in item.items()}

    def _format_key(self, prefix, key):
        return '{0}:{1}'.format(prefix, key)


class RedisDependency(DependencyProvider):
    def setup(self):
        self.client = StrictRedis.from_url(
            self.container.config.get('REDIS_URI'))

    def get_dependency(self, worker_ctx):
        return RedisWrapper(self.client)
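For what it's worth, a quick way to drive the handlers without the real Redis
client (to see whether the dependency matters at all) is nameko's
worker_factory test helper; a rough sketch, with the import path matching my
own layout:

from nameko.testing.services import worker_factory

from service.service import Service  # adjust to wherever Service lives


def hammer_handlers(iterations=100000):
    # worker_factory replaces every declared dependency (dispatcher, r_db)
    # with a MagicMock, so only the service code and logging actually run
    service = worker_factory(Service)
    for _ in range(iterations):
        service.handle_publish({"message": "hello world"})
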
*config.yml*
AMQP_URI: amqp://${RMQ_USER:guest}:${RMQ_PASSWORD:guest}@${RMQ_HOST:localhost}
REDIS_URI: redis://myredishost:6379
LOGGING:
  version: 1
  handlers:
    console:
      class: logging.StreamHandler
    file:
      class: logging.handlers.RotatingFileHandler
      filename: service.log
      maxBytes: 1048576
      backupCount: 3
  root:
    level: INFO
    handlers: [console]
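For completeness, the container entrypoint is essentially the standard nameko
CLI run (exact module path aside):

nameko run --config config.yml service.service

As far as I understand, nameko then passes the LOGGING section to
logging.config.dictConfig at startup, so the console handler above should be
the one actually in effect.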
*Overview of the memory allocation:*
I know this is not a huge increase, but it follows a pattern and is steady
(it will eventually cause the container to restart when it hits the memory
limit). Containers with more load leak memory faster.
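For anyone who wants to watch this without my monitoring setup, a crude
sampler like the following (psutil, run inside the container, purely for this
check and not part of the service) is enough to see the trend:

import time

import psutil


def watch_rss(pid, interval=30):
    proc = psutil.Process(pid)
    while True:
        rss_mb = proc.memory_info().rss / (1024.0 * 1024)
        print("rss: {:.1f} MB".format(rss_mb))
        time.sleep(interval)
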
*What I have tried so far (without luck):*
- Logging to file instead of stdout.
- Manually calling gc.collect(). Since this doesn't release any memory, I
  figure it's not a reference problem but rather that the process won't hand
  memory back to the container once the gc has freed it (see the tracemalloc
  sketch after this list).
- Removing the dependency injection and calling Redis from the service
  itself.
- Decreasing and increasing the number of workers.
- Simplifying the service to just a timer that logs a message every five
  seconds (memory still increases).
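The tracemalloc sketch mentioned above: the idea is to tell a genuine
Python-level leak (object allocations keep growing) apart from the
interpreter simply not handing freed pages back to the OS. Roughly (tracemalloc
is Python 3 stdlib):

import gc
import tracemalloc

from nameko.timer import timer

tracemalloc.start()


class MemoryDebugService:
    name = "memory_debug"

    @timer(interval=60)
    def dump_top_allocations(self):
        gc.collect()
        snapshot = tracemalloc.take_snapshot()
        # if these totals stay flat while the container RSS keeps climbing,
        # the growth happens below Python (allocator level), not in references
        for stat in snapshot.statistics("lineno")[:10]:
            print(stat)

If the Python-level totals stay flat while the container keeps growing, that
would at least back up the theory above that freed memory just isn't being
returned to the OS.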
I understand that this might be perceived as too little information to give
any valuable input, but I sincerely don't know how to proceed. Does anyone
have any ideas, see anything really strange in the example, or has anyone
experienced something similar?
If you came this far, thank you for reading and taking your time! Any help
would be very much appreciated!
Disclaimer: I don't necessarily believe this has to do with Nameko in
particular, but rather with some combination of errors, dependencies, or bad
implementation on my part.