Hi,
It seems that a runtime error in a dependency stops nameko (you can "raise"
something in a get_dependency() method to test it).
This is a very BAD scenario.
So, other than wrapping the code block in try..except, is there a way to
prevent nameko from stopping?
Thanks.
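(For readers following along, a minimal sketch of the reproduction described
above; the service and provider names here are illustrative only, not from the
original post:)

```python
# repro.py -- minimal sketch of the failure described above; names are
# illustrative. Raising inside get_dependency stops the whole container.
from nameko.extensions import DependencyProvider
from nameko.rpc import rpc


class Broken(DependencyProvider):
    def get_dependency(self, worker_ctx):
        # simulate a runtime error while creating the injected dependency
        raise RuntimeError("boom")


class Demo:
    name = "demo"

    broken = Broken()

    @rpc
    def hello(self):
        return "never reached"
```

Running this with `nameko run repro` and calling `demo.hello` over RPC
triggers the error and, per the behaviour discussed in the replies below, the
container terminates rather than returning an error to the caller.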
Hi,
`get_dependency` is responsible for creating the dependency to be injected
into the service. If this fails, the service won't have its dependency
(which it "depends on"), and the implication is that the service can't run
correctly, so the container terminates.
What would be your preferred outcome? The service running without that
particular dependency? Perhaps you could provide some more details about
your particular use case to help me understand it better.
Best,
David
Hi David,
Thanks for the prompt reply.
There is no special use case. It happened to me when my Mongo DB threw an
exception (the get_dependency returns a Mongo DB connection object).
I agree that these kinds of operations (IO, etc.) should be try..except'ed,
but still, since nameko is a core service which other components rely upon,
as far as I'm concerned it should "never fail".
Imagine Tornado/Apache/nginx/RabbitMQ/etc. simply stopping due to an
internal error. That would be unacceptable.
IMHO, if a dependency throws an error, the current worker context should
quit, but the service should still be up and running.
The error may be temporary (e.g. a timeout, malformed input data to the
worker->injected dependency, etc.), so why fail completely?
Don't you agree?
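(For context, a rough sketch of the kind of provider being described, assuming
pymongo; the `MONGO_URI` config key and database name are illustrative, not
from this thread:)

```python
# mongo_provider.py -- rough sketch of a Mongo-backed DependencyProvider,
# assuming pymongo; MONGO_URI and the database name are illustrative.
from nameko.extensions import DependencyProvider
from pymongo import MongoClient


class MongoDatabase(DependencyProvider):
    def setup(self):
        # one client per container, created when the service starts
        self.client = MongoClient(self.container.config["MONGO_URI"])

    def stop(self):
        self.client.close()

    def get_dependency(self, worker_ctx):
        # an exception raised here currently stops the whole container
        return self.client["my_database"]
```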
Hi,
Nameko can't tell the difference between, e.g., a connection error
(recoverable) and, say, a configuration error (e.g. a bad mongo hostname, not
recoverable). What was the nature of the mongo exception in your case? If
you occasionally fail to connect due to temporary errors, you might want to
add retry behaviour inside your dependency.
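(For illustration, a sketch of what such retry behaviour inside the dependency
might look like; the exception types, attempt count and back-off here are
assumptions, not from this thread:)

```python
# retrying_provider.py -- sketch of retrying inside the dependency, as
# suggested above; the retry policy here is an assumption for illustration.
import time

from nameko.extensions import DependencyProvider
from pymongo import MongoClient
from pymongo.errors import PyMongoError


class MongoDatabase(DependencyProvider):
    def setup(self):
        self.client = MongoClient(
            self.container.config["MONGO_URI"],
            serverSelectionTimeoutMS=2000,
        )

    def get_dependency(self, worker_ctx):
        # retry a couple of times on transient errors before giving up
        for attempt in range(3):
            try:
                self.client.admin.command("ping")  # cheap connectivity check
                return self.client["my_database"]
            except PyMongoError:
                if attempt == 2:
                    raise  # still failing: surface the error as before
                time.sleep(1)
```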
We've tended to find that we get better mileage from running nameko under a
supervisor process (onefinestay uses supervisord) and letting that restart
the whole process on that type of error.
D
This behaviour is intentional, but I can understand why it’s confusing.
As David explained, Nameko can’t tell whether errors are transient or
fatal, so it does the “safe” thing and assumes the error is unrecoverable.
The only time this isn’t true is during the worker lifecycle (i.e. when
executing a service method) because an error there can be attributed to the
active worker.
There is a valid argument that errors specifically in `get_dependency` are
also tied to a particular worker. It might be more intuitive to catch them
and return an appropriate error to the caller (if there is one). The caller
could then retry if appropriate, and the service would stay up in the
meantime. This also doesn’t preclude a DependencyProvider from catching
exceptions and retrying itself, if it wanted to hide transient errors from
the caller.
However, it wouldn’t change the fact that users should “expect failures”
and run Nameko under a process manager. The Nameko process will
still exit, for example, if an entrypoint can’t serialise the result of a
service method and hasn’t been made appropriately robust. Unlike
`get_dependency`, there’s nothing we can do in Nameko core to protect
against that kind of error.
What do you think David?
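(To make the serialisation point concrete, a small sketch assuming the default
JSON serialiser; the names are illustrative. The method itself succeeds, but
the entrypoint then fails to serialise the return value, which happens outside
the worker's own code:)

```python
# serialisation_failure.py -- sketch of the failure mode described above,
# assuming the default JSON serialiser; names are illustrative.
from nameko.rpc import rpc


class Opaque:
    """A plain object the default JSON serialiser cannot encode."""


class Demo:
    name = "demo"

    @rpc
    def fetch(self):
        # the service method itself succeeds; serialising the result fails
        return Opaque()
```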
Hi,
Having described the behaviour and our reasons above, I have to say the
existing behaviour does feel a bit unsatisfactory.
Rather than crashing, it might make sense for calls to return some kind of
"server error" response (a la 500 errors in HTTP).
While nameko can't "protect" against dependencies failing or unserialisable
responses, we could do something other than crash the process. It doesn't
seem unreasonable to me to expect the runner to survive such things, and
"use a supervisor" (as was my original advice) seems like a bit of a cop-out.
I haven't thought through all the implications (e.g. would "extension errors"
ack or nack AMQP messages? does each entrypoint decide?), nor about how
difficult this would be to implement, nor about backwards compatibility, but
the general idea might be something we want to think about.
best,
d
Hi,
I agree with you guys that there is no alternative to good coding
practices, exception handling, input sanitisation and process monitoring.
Having said that, from the discussion so far, it looks like crashing the
server is almost by design!? "If there's an error in X then nameko will
*crash*."
Of course IO operations in the dependency will generally be try..except'd,
but imagine a calculation block which happens to throw a division by zero,
or maybe I'm relying on an external package (OpenCV, PIL, whatever) that
throws an error due to bad arguments passed to one of its functions.
Should the service crash in such cases?
IMO, nameko should catch these errors, raise an exception or return an
'error' response, but never crash.
Of course I am not involved in this project, so I can't tell if it's doable
(conceptually, practically, etc.) and I don't know the impact of
implementing it.
Anyway, thank you guys for discussing this subject. Would love to hear your
thoughts and any updates.
Best
T
Just to be clear, the situation isn't quite that dire. Nameko makes a
distinction between service code and "framework/plumbing" (core +
extensions) code. Uncaught exceptions in service code _are_ caught by the
framework and result in error responses.
This discussion is about the possibility of changing the behaviour for
exceptions in extensions, e.g. in get_dependency and other places that are
tied to a worker's lifecycle, from the former behaviour to the latter. For
certain errors I do think crashing is the correct behaviour, e.g. during a
"setup" phase: if my app can't resolve the hostname of my db server, I want
it to crash at startup, not fail when serving its first (and all subsequent)
requests.
d
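(A small sketch of that distinction, with an illustrative service name: the
uncaught ZeroDivisionError below is raised in service code, so it comes back
to the caller as an error response while the service keeps running.)

```python
# service_error.py -- sketch of an error raised in service code; the
# framework catches it and returns an error response to the caller.
from nameko.rpc import rpc


class Demo:
    name = "demo"

    @rpc
    def divide(self, a, b):
        # passing b == 0 raises ZeroDivisionError inside service code;
        # the caller sees an error response, the container stays up
        return a / b
```

Calling `demo.divide(1, 0)` from another service or a standalone proxy should
surface as a RemoteError on the calling side, while the demo container itself
keeps serving requests.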
I understand that crashing during a setup phase is unavoidable, and that is
totally acceptable.
This is why I was referring specifically to the get_dependency method, which
is message-bound rather than service-bound.