bpo-39812: Remove daemon threads in concurrent.futures #19149
Conversation
aeros commented Mar 25, 2020 • edited by bedevere-bot
```python
_threading_atexits = None

def _register_atexit(func, *arg, **kwargs):
    """CPython internal: register *func* to be called before joining threads.
```
I specified "CPython internal" instead of just "internal" to make it clearer that it's intended for use across stdlib modules (as opposed to being an internal helper function for threading itself).
At some point, I think this could graduate to the public API, but at the moment it's a bit too niche since subinterpreters have not yet officially made it into the stdlib. If such a utility is requested by users, I think it could be moved without much hassle though. That's partly why I included a detailed docstring.
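For context, a minimal sketch of the shape of this helper (my illustration of the idea, not the exact patch; the real version also needs to guard against registration once shutdown has started):

```python
import functools

_threading_atexits = []

def _register_atexit(func, *arg, **kwargs):
    """CPython internal: register *func* to be called before joining threads."""
    # Bind the arguments now; the call itself happens during threading's
    # shutdown, before the interpreter waits on remaining non-daemon threads.
    # That timing is what lets concurrent.futures wake up and join its
    # workers cleanly instead of relying on daemon threads.
    _threading_atexits.append(functools.partial(func, *arg, **kwargs))
```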
This sounds good to me, thanks for the explanation.
```python
atexit.register(_python_exit)
```
This change just moves the atexit registration so it's in the same location for both ProcessPoolExecutor and ThreadPoolExecutor (right under the definition of _python_exit()). It can be removed if needed, but I think it makes the registrations easier to locate for anyone reading over the implementation details.
aeros commented Mar 25, 2020
I considered adding a unit test for `_register_atexit()`. That being said, I think it would be worthwhile to add a test along those lines if it were to be added to the public API for `threading`.
```python
# Workers are created as daemon threads and processes. This is done to allow the
# interpreter to exit when there are still idle processes in a
# ProcessPoolExecutor's process pool (i.e. shutdown() was not called). However,
# allowing workers to die with the interpreter has two undesirable properties:
#   - The workers would still be running during interpreter shutdown,
#     meaning that they would fail in unpredictable ways.
#   - The workers could be killed while evaluating a work item, which could
#     be bad if the callable being evaluated has external side-effects e.g.
#     writing to a file.
#
# To work around this problem, an exit handler is installed which tells the
# workers to exit when their work queues are empty and then waits until the
# threads/processes finish.
```
I believe this section is no longer relevant now that the workers are no longer daemon threads. Correct me if I'm mistaken.
bedevere-bot commented Mar 25, 2020
aeros commented Mar 25, 2020
(Note that …)
aeros commented Mar 25, 2020
I'm 99.99% certain the …
aeros commented Mar 25, 2020 • edited
I opened a separate issue to address the (unrelated) buildbot refleaks from …
pitrou left a comment
Thanks for doing this. I think _register_atexit deserves at least some basic unit test(s). It should be fairly easy, e.g. by spawning a subprocess and checking its output.
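Something along the lines pitrou suggests (a hypothetical sketch; the test name and exact assertions are mine, assuming the helper keeps the `threading._register_atexit` spelling):

```python
import subprocess
import sys
import textwrap
import unittest

class RegisterAtexitTest(unittest.TestCase):
    def test_atexit_callback_runs_at_shutdown(self):
        # Run a child interpreter that registers a threading atexit
        # callback, then check the child's output for both markers.
        code = textwrap.dedent("""
            import threading
            threading._register_atexit(print, 'atexit ran')
            t = threading.Thread(target=print, args=('thread ran',))
            t.start()
        """)
        proc = subprocess.run([sys.executable, "-c", code],
                              capture_output=True, text=True, check=True)
        self.assertIn("atexit ran", proc.stdout)
        self.assertIn("thread ran", proc.stdout)
```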
aeros commented Mar 25, 2020 • edited
No problem, thanks for the review. :-)
Yeah, after having some additional time to think about it, I think we'd be better off with a couple of basic unit tests for it, even if it's internal. I'll work on those next.
aeros commented Mar 25, 2020
Going to restart the CI (via close and re-open) since it failed in the "Install Dependencies" stage.
aeros commented Mar 25, 2020
@pitrou I believe that I've addressed the review comments. Let me know if the unit tests I added are sufficient.
pitrou commented Mar 27, 2020
(closing and reopening to trigger CI)
pitrou commented Mar 27, 2020
That's a good idea.
pitrou commented Mar 27, 2020
Oh, and yes, the unit tests look sufficient for an internal function :-)
aeros commented Mar 27, 2020 • edited
By the way, for some reason GitHub Actions leaves the previous CI failures in place and duplicates the tests when you close and re-open the PR. So some of the failures showing are from the first run (before I closed and re-opened it); that's why there are 3 of each GitHub Actions test. I'm not certain why they're failing in the early stages, though. If it happens again, I'll start a thread on python-dev.
Great! I'll work on the "What's New" entry next; it shouldn't take long. :-) (It should also clear the old CI tests since it's a different commit)
```
Removed daemon threads from :class:`~concurrent.futures.ThreadPoolExecutor`
and :class:`~concurrent.futures.ProcessPoolExecutor`. This improves
```
Using a tilde (~) shortens the Sphinx markup link so it appears as just the last part, i.e. ThreadPoolExecutor and ProcessPoolExecutor. I typically use this when the classes are easily distinguishable based on their names alone and the section already makes it clear which module they're in. This helps keep it a bit more succinct.
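For example, :class:`~concurrent.futures.ThreadPoolExecutor` renders as just "ThreadPoolExecutor", whereas the same role without the tilde renders as the full dotted path "concurrent.futures.ThreadPoolExecutor".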
pitrou left a comment
Thanks for doing this @aeros !
```python
# Set this thread to be daemonized
super().__init__()
self.daemon = True
```
I'm actually depending on this behavior and my code breaks in Python 3.9. Is there a way to set the threads to be daemon? The error I got:
```
Traceback (most recent call last):
  File "/Users/laike9m/.pyenv/versions/3.9.0/lib/python3.9/threading.py", line 950, in _bootstrap_inner
    self.run()
  File "/Users/laike9m/.pyenv/versions/3.9.0/lib/python3.9/threading.py", line 888, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/laike9m/.pyenv/versions/[email protected]/lib/python3.9/site-packages/grpc/_server.py", line 875, in _serve
    if not _process_event_and_continue(state, event):
  File "/Users/laike9m/.pyenv/versions/[email protected]/lib/python3.9/site-packages/grpc/_server.py", line 839, in _process_event_and_continue
    rpc_state, rpc_future = _handle_call(event, state.generic_handlers,
  File "/Users/laike9m/.pyenv/versions/[email protected]/lib/python3.9/site-packages/grpc/_server.py", line 749, in _handle_call
    return _handle_with_method_handler(rpc_event, method_handler,
  File "/Users/laike9m/.pyenv/versions/[email protected]/lib/python3.9/site-packages/grpc/_server.py", line 722, in _handle_with_method_handler
    return state, _handle_unary_stream(rpc_event, state,
  File "/Users/laike9m/.pyenv/versions/[email protected]/lib/python3.9/site-packages/grpc/_server.py", line 640, in _handle_unary_stream
    return thread_pool.submit(_stream_response_in_pool, rpc_event, state,
  File "/Users/laike9m/.pyenv/versions/3.9.0/lib/python3.9/concurrent/futures/thread.py", line 163, in submit
    raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
```

which I believe is relevant. @aeros
@laike9m Hmm, does your code specifically rely on submitting new jobs after executor.shutdown() gets called? If so, would it be possible to move the executor.shutdown() to a later point (optimally, you should be done with using the executor before calling shutdown())?
That being said, there may be some reason to consider allowing users to opt-in to using daemon threads; particularly since the decision to prevent subinterpreters from using daemon threads was reverted (which was originally the main motivation for this PR, with the benefit of more predictable shutdown behavior).
Do you have any thoughts @pitrou?
@laike9m What do you mean with "break"? Does the process not exit cleanly?
laike9m commented Oct 26, 2020 • edited
Let me describe my use case.
I'm using gRPC which starts a server using ThreadPoolExecutor:
https://github.com/laike9m/Cyberbrain/blob/master/cyberbrain/rpc_server.py#L235
This server runs in another thread besides the main thread, and waits via server.wait_for_termination() until the user explicitly presses Ctrl-C. The main thread runs and finishes its work; clients may or may not connect to the server before the main thread ends, and the requirement is for the server to keep responding even after the main thread has finished. In Python <=3.8 this works, but in 3.9 I got the above error.
I'm not 100% sure that the root cause is this change, but I scanned through all the changes to concurrent.futures in 3.9, and this seems to be the most likely one.
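A minimal sketch of the failure mode being described (my own repro attempt, not laike9m's code): a non-daemon thread outlives the main thread and then tries to submit to a ThreadPoolExecutor after interpreter shutdown has begun.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def late_submit():
    time.sleep(1)  # let the main thread finish first
    # By now the threading atexit hooks have flagged the executor as shut
    # down, so on 3.9+ this raises:
    # RuntimeError: cannot schedule new futures after interpreter shutdown
    pool.submit(print, "too late")

threading.Thread(target=late_submit).start()
# Main thread ends here; interpreter shutdown begins and waits on the
# non-daemon thread above.
```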
I see, then I think you need to define a signal handler that will be called when Ctrl-C is pressed, and that will ask the server to stop gracefully.
emm, it's actually not about what happens when users press Ctrl+C, but the server is not able to serve requests before that...
server starts listening --> main thread finishes --> requests come --> Ctrl-C (problem is here)
I'll try to come up with a minimal example that can reproduce this error.
Ah, then it's simpler: you should ask the server to finish and wait for it to finish before you let the main thread exit.
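For instance (a sketch assuming the standard grpcio API; the port and grace period here are made up):

```python
import grpc
from concurrent import futures

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
server.add_insecure_port("[::]:50051")  # hypothetical port
server.start()

# ... main thread does its own work here ...

# Before letting the main thread exit, block until the server is done
# serving, and stop it gracefully on Ctrl-C.
try:
    server.wait_for_termination()
except KeyboardInterrupt:
    server.stop(grace=5).wait()  # stop() returns a threading.Event
```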
Sounds good, thanks.
aeros commented Feb 29, 2024
@gvanrossum this PR needs to be reverted, but I no longer have my commit bit
gpshead commented Feb 29, 2024
Please open a new issue with reasoning. In general we do not want daemon threads anywhere anymore. Their existence has turned out to be a misfeature as they mean you cannot safely finalize a Python runtime when such threads are present.
Bluehorn commented Mar 25, 2024
I am also curious why @aeros stated that this needs reverting. It looks like a backwards-incompatible change (which is why I ended up reading this ticket), but it also looks like affected projects can cope easily. I ran into this because I manipulate interpreter shutdown for a test suite, which at times got stuck due to some non-daemon thread "helpfully" started in some library package. That code is not used in production but for diagnostics, as the failure was intermittent and difficult to reproduce.
I think nobody "wants" units of execution that are just killed without notice in whatever state they are in. I'd just like to advise against dropping them without a replacement. The use case I see is dealing with threads getting stuck in some native code. This came up a few times in my career and was always a giant PITA to solve (or, actually, work around). What I'd like is a way to detach a thread that may never come back from extension code, so that it just silently stops. I am not sure if and how that could be implemented, though. Sorry for the noise; this got longer than anticipated.
pitrou commented Mar 25, 2024
gvanrossum commented Mar 25, 2024
The leading theory is that he lost control over his account. That account's permissions have been dropped.
Removes daemon threads from the Executor internals in concurrent.futures to allow for compatibility with subinterpreters, which no longer support daemon threads as of https://bugs.python.org/issue37266.
/cc @pitrou
https://bugs.python.org/issue39812