This repository was archived by the owner on Jun 10, 2025. It is now read-only.
While working on tracing, when I stop a worker I get this error during cleanup:
```
2022-10-26 14:58:43 [error ] Failed to detach context [opentelemetry.context]
Traceback (most recent call last):
  File "/Users/israelhalle/Library/Caches/pypoetry/virtualenvs/saturn-engine-WvplgCTT-py3.9/lib/python3.9/site-packages/opentelemetry/trace/__init__.py", line 573, in use_span
    yield span
  File "/Users/israelhalle/Library/Caches/pypoetry/virtualenvs/saturn-engine-WvplgCTT-py3.9/lib/python3.9/site-packages/opentelemetry/sdk/trace/__init__.py", line 1033, in start_as_current_span
    yield span_context
  File "/Users/israelhalle/devel/saturn/src/saturn_engine/worker/services/tracing/tracer.py", line 37, in on_message_executed
    results = yield
GeneratorExit

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/israelhalle/Library/Caches/pypoetry/virtualenvs/saturn-engine-WvplgCTT-py3.9/lib/python3.9/site-packages/opentelemetry/context/__init__.py", line 157, in detach
    _RUNTIME_CONTEXT.detach(token)  # type: ignore
  File "/Users/israelhalle/Library/Caches/pypoetry/virtualenvs/saturn-engine-WvplgCTT-py3.9/lib/python3.9/site-packages/opentelemetry/context/contextvars_context.py", line 50, in detach
    self._current_context.reset(token)  # type: ignore
ValueError: <Token var=<ContextVar name='current_context' default={} at 0x1037375e0> at 0x10371b040> was created in a different Context
```
It seems like the coroutine is being switched between contexts / tasks. Perhaps a bug in TaskGroup or the Scheduler? It would be nice to reproduce this in a unittest and ensure that coroutines remain in the same context / task throughout their lifecycle.
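For context, this is a minimal sketch of the suspected failure mode using plain `contextvars` / `asyncio` (hypothetical names, not saturn-engine or OpenTelemetry code): a `ContextVar` token created while running in one task cannot be reset from a coroutine resumed in a different task, because each task runs in its own copy of the context.

```python
import asyncio
import contextvars

# Stand-in for opentelemetry's internal "current_context" ContextVar.
current_context = contextvars.ContextVar("current_context", default={})

async def main() -> str:
    # "Entering the span": set the var and keep the reset token.
    token = current_context.set({"span": "on_message_executed"})

    async def cleanup() -> None:
        # asyncio.create_task() runs this coroutine in a *copy* of the
        # caller's context, so the token is foreign here and reset()
        # raises "was created in a different Context" -- the same error
        # OpenTelemetry logs in detach().
        current_context.reset(token)

    try:
        await asyncio.create_task(cleanup())
    except ValueError as exc:
        return str(exc)
    return "no error"

print(asyncio.run(main()))  # message mentions "was created in a different Context"
```

If a scheduler closes or resumes a span-holding generator from a different task than the one that started it, detach fails exactly like this.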
Note that this only happens during cleanup.
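A unittest along the lines requested might look like the following sketch. It uses plain `asyncio` rather than saturn-engine's TaskGroup/Scheduler (all names here are hypothetical), recording which task a coroutine runs in both on entry and during cancellation-driven cleanup, then asserting they match:

```python
import asyncio
import unittest

class CoroutineTaskAffinityTest(unittest.IsolatedAsyncioTestCase):
    async def test_cleanup_runs_in_the_same_task(self) -> None:
        seen_tasks = []

        async def worker() -> None:
            # Record the task we start in.
            seen_tasks.append(asyncio.current_task())
            try:
                await asyncio.sleep(3600)
            finally:
                # Record the task that resumes us during cleanup; if a
                # scheduler moved the coroutine, these would differ.
                seen_tasks.append(asyncio.current_task())

        task = asyncio.create_task(worker())
        await asyncio.sleep(0)  # let the worker start
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

        self.assertEqual(seen_tasks[0], seen_tasks[1])

if __name__ == "__main__":
    unittest.main()
```

The same pattern could be pointed at the Scheduler's shutdown path to check whether span-holding coroutines are closed from the task that created them.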