First of all: thanks for a great lib! You really nailed it here, guys.
I'm involved in a fairly large project where nameko is one of the proposed microservices frameworks.
However, we currently have a discussion going about nameko in production. Since the resources and real-world experiences are painfully few (or at least seldom shared), it's hard to find information about how people host and run nameko in production.
There's some discussion in this group about failure at startup and so on, but it's more of a "hey, this doesn't work the way I'd like" discussion than a sharing of experiences and examples.
So I'm interested in your input on how to do this. Docker? Supervisor? Nginx in front of gunicorn (for HTTP)? Alpine?
I was involved in running nameko at onefinestay. We used supervisor (on straight EC2; we started before docker was around, and at least hadn't got around to changing by the time I left). We made use of this: we relied on errors that weren't easily recoverable taking down the whole container (and with it the python process), and let supervisor start the whole thing up again (or detect errors on startup and eventually stop trying).
Though the runner can host multiple services in a single python process, we ran each service in its own process. We also configured supervisor to first try to stop services gracefully (via SIGTERM). Towards the end I think we had a custom signal handler that first called `container.stop`, waited for a while, and then called `container.kill` (which is still a bit more graceful than supervisor's fallback, which is SIGKILL).
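Roughly, the shape of that handler was something like the sketch below (from memory, not our actual code; the runner API usage and the 30-second grace period are illustrative):

```python
# Sketch of a SIGTERM handler: graceful stop first, kill as a fallback.
# Not the original onefinestay handler; the 30s timeout is arbitrary.
import signal

import eventlet
from nameko.runners import ServiceRunner

from my_project.service import MyService  # hypothetical service module

config = {"AMQP_URI": "amqp://guest:guest@localhost:5672/"}
runner = ServiceRunner(config)
runner.add_service(MyService)

def stop_then_kill():
    try:
        # ask the containers to stop gracefully first...
        with eventlet.Timeout(30):
            runner.stop()
    except eventlet.Timeout:
        # ...and fall back to kill, which is still more orderly
        # than supervisor's own fallback of SIGKILL
        runner.kill()

def handle_sigterm(signum, frame):
    # signal handlers can't block on eventlet primitives directly,
    # so run the stop/kill sequence in its own greenthread
    eventlet.spawn_n(stop_then_kill)

signal.signal(signal.SIGTERM, handle_sigterm)

runner.start()
runner.wait()
```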
For web entrypoints we didn't use gunicorn, but we had nginx in front for SSL termination (in some cases), load balancing, and protection against misbehaving clients.
We also had custom (nameko) extensions for sentry and logstash logging that
are mentioned elsewhere.
Matt is using nameko at his new place, in docker containers I believe, but I'll let him fill in the details. Jessie (and others) might also have helpful experiences to share.
Best,
David
The supervisor approach is what I'm looking into as well, and probably close to where we'll land.
I'm also really interested in hearing about Matt's experience with docker. At the moment I'm having some problems connecting to the rabbitmq container from inside another container, although it works fine from the host. So there's probably some configuration missing here. I've changed the obvious things such as the host, but there's possibly something else I'm missing.
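For reference, this is roughly the shape of what I'd expect to work (a sketch; the `rabbit` alias and the credentials are placeholders, assuming both containers sit on a shared docker network where the rabbitmq container is reachable as `rabbit`):

```python
# Inside a container, "localhost" refers to the container itself, so the
# AMQP URI has to use the rabbitmq container's network alias instead.
from nameko.standalone.rpc import ClusterRpcProxy

config = {
    # placeholder alias/credentials; "rabbit" is the rabbitmq
    # container's name/alias on the shared docker network
    "AMQP_URI": "amqp://guest:guest@rabbit:5672/",
}

with ClusterRpcProxy(config) as cluster_rpc:
    # hypothetical service and method, just to exercise the connection
    print(cluster_rpc.my_service.ping())
```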
At Student.com we host each service in its own docker container. We are putting together a reference application over at Orders by kooba · Pull Request #1 · nameko/nameko-examples · GitHub, and you can see there exactly how we configure the Dockerfiles. Stopping the docker container gracefully stops the services.
As David mentioned, Nameko intentionally allows difficult-to-recover errors
to bubble and kill the runner process. In production we use a restart
policy to bring crashed processes back up again.
We also use Nginx for SSL termination and otherwise rely on the Eventlet
WSGI server that runs inside the nameko process.
In terms of monitoring, we use nameko-sentry (on PyPI) to track and alert on entrypoint errors. We also have an "entrypoint logging" extension (similar to David's logstash logging) that sends trace information to an elasticsearch cluster, capturing the call stack, arguments, results, and exceptions generated by each entrypoint that fires.
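To give a flavour of it, here's a stripped-down sketch of such an extension (not our actual implementation; the record fields and the plain-logging transport stand in for the elasticsearch shipping):

```python
# Rough sketch of an "entrypoint logging" extension: emit one trace
# record per entrypoint invocation. A real version would ship these
# records to logstash/elasticsearch rather than a plain logger.
import json
import logging

from nameko.extensions import DependencyProvider

log = logging.getLogger("entrypoint_trace")

class EntrypointLogger(DependencyProvider):

    def worker_setup(self, worker_ctx):
        # fires just before the entrypoint method runs
        log.info(json.dumps({
            "lifecycle": "setup",
            "service": worker_ctx.service_name,
            "entrypoint": worker_ctx.entrypoint.method_name,
            "call_id": worker_ctx.call_id,
            "args": repr(worker_ctx.args),
            "kwargs": repr(worker_ctx.kwargs),
        }))

    def worker_result(self, worker_ctx, result=None, exc_info=None):
        # fires with the entrypoint's result or exception
        log.info(json.dumps({
            "lifecycle": "result",
            "call_id": worker_ctx.call_id,
            "result": repr(result),
            "exception": repr(exc_info[1]) if exc_info else None,
        }))
```

Declaring `EntrypointLogger()` as a dependency on a service is enough for it to observe every entrypoint on that service.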
Hopefully some other folks will add their experience on this thread too.
Please ask any other questions you have!
Matt.
Hey guys, I would really appreciate your help with my question in "Making reply queues non-durable". We're in production and things don't look good on the rabbit server in terms of IO. Please help with your insight.
Certainly some good input! The nameko-examples repo is a really valuable asset. Hopefully I can provide a little more feedback once I get my hands dirty. I will also get back with feedback on our approach once we have decided. But I think we have already shed some light on this area, provided some real value for the community, and hopefully lowered the barrier to entry a little.
Please don't cross-post like this. Someone will reply to your threads if they have helpful input and when they get a chance. We can usually help; it just all takes time.
D
Thanks for posting the example there. It's been a really helpful reference.
I'm in the process of deploying a small nameko app that follows your example of exposing services via an HTTP API gateway service. Aside from the reasons you mentioned for using Nginx, is there any other benefit to having it act as a proxy, or can I simply let the API gateway handle the requests directly? Is something like Gunicorn, for example, unnecessary here?
That's correct. Nameko uses eventlet's web server, which is pretty good. Being event-based, it doesn't have the single-threaded limitation of, for example, the Django and Flask development servers.
Nginx is useful because it's so full-featured, but if you don't need proxying or URL rewriting or anything like that, you can serve nameko directly.
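For instance, something as small as this can be served directly (a sketch; the service name and route are made up):

```python
# Minimal HTTP entrypoint served by nameko's built-in eventlet WSGI
# server; no gunicorn or nginx required in front.
from nameko.web.handlers import http

class HealthService:
    name = "health_service"  # illustrative service name

    @http("GET", "/health")
    def health(self, request):
        # returning (status, body) becomes the HTTP response
        return 200, "OK"
```

Run it with `nameko run` and the built-in server listens on the configured WEB_SERVER_ADDRESS (0.0.0.0:8000 by default).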
Ah yes, that makes sense.
Thanks for sharing, and for the great work on nameko!
We ended up using supervisor together with nameko, with no nginx involved in front of the actual microservices. So far everything works great.