
Conversation

@encukou (Member) commented Feb 21, 2025

If Py_IsFinalizing() is true, non-daemon threads (other than the current one) are done, and daemon threads are prevented from acquiring the GIL (or a thread state), so they cannot finalize themselves and become done. Joining them without a timeout would block forever.

Raise PythonFinalizationError instead of hanging.

See gh-123940 for a real-world use case: calling join() from __del__.
Doing this is still ill-advised, but an exception should at least make the issue easier to diagnose.
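A minimal sketch of that pattern (hedged: the Worker class and its attribute names are illustrative, not taken from the linked issue; PythonFinalizationError is the built-in exception added in Python 3.13, and the join() behavior in the comments is what this PR introduces):

```python
import threading
import time


class Worker:
    """Owns a background daemon thread and joins it in __del__ (still ill-advised)."""

    def __init__(self):
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            time.sleep(0.05)

    def __del__(self):
        self._stop.set()
        try:
            # Before this change: blocks forever if __del__ runs during
            # interpreter finalization (the daemon thread can never exit).
            # After this change: raises PythonFinalizationError instead.
            self._thread.join()
        except PythonFinalizationError:
            pass


worker = Worker()  # may be collected only at shutdown, so __del__ can run during finalization
```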


📚 Documentation preview 📚: https://cpython-previews--130402.org.readthedocs.build/

@colesbury (Contributor) commented

This seems like a good idea to me.

  • What does thread.is_alive() return?
  • I think the exception should not be conditional on not having a timeout specified. There's no way the join can succeed, so we should just raise the exception immediately, like we do when trying to join your own thread. You can also end up with thread.join(timeout=...) calls in an infinite loop (see the sketch below).
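
A hedged sketch of the infinite-loop scenario from the second bullet (wait_for_worker and the 0.5 s timeout are illustrative): if the exception were raised only when no timeout is given, this loop would spin forever during finalization, because the daemon thread can never finish and is_alive() stays true.

```python
import threading


def wait_for_worker(thread: threading.Thread) -> None:
    # During interpreter finalization the worker can never exit, so with a
    # timeout-only exception each join() call would simply time out and
    # is_alive() would stay True -- an infinite loop. Raising
    # PythonFinalizationError unconditionally breaks the loop loudly instead.
    while thread.is_alive():
        thread.join(timeout=0.5)
```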

@vstinner (Member) commented

I think the exception should not be conditional on not having a timeout specified

I agree with Sam.

@encukou (Member, Author) commented

What does thread.is_alive() return?

True. Threads that are already done can be joined normally.

I think the exception should not be conditional on not having a timeout specified.

In a finalizer, wouldn't it be OK to wait a bit for graceful termination (using join with a timeout), and then do some teardown regardless of whether the thread survived?
(If Python is being finalized, the thread would of course always survive -- but you might not be writing the code only for that case.)

Raising an exception would mean you skip that teardown, unless you have a try/except around join.

IOW, to me, the reasoning is not as clear-cut here as in the “hang the only Python thread that can run” case.
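
For example, a finalizer along the lines sketched below (close and teardown are illustrative names) would have been reasonable with the timeout-only behavior; with the unconditional exception it needs a try/except so the teardown is not skipped:

```python
import threading


def close(thread: threading.Thread, teardown) -> None:
    try:
        # Give the worker a short grace period to exit on its own...
        thread.join(timeout=1.0)
    except PythonFinalizationError:
        # ...but during interpreter finalization it never will.
        pass
    # Tear down regardless of whether the thread survived.
    teardown()
```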

You can also end up with thread.join(timeout=...) calls in an infinite loop.

Well... You can even write an infinite loop without any join at all! :)
I guess I'm not trying to prevent hangs entirely, just make them easier to diagnose. IMO, a while thread.is_alive() loop is much easier to grok than an internal lock becoming un-acquirable.


As written, the code still waits out the timeout (rather than raising) when one is given; I'll update the PR if you still think raising unconditionally is the way to go.

@encukou (Member, Author) commented

Updated to raise even with timeout.

Member commented on the diff:

Is this code path taken by all threads, or only daemon threads?

@encukou (Member, Author) replied:

It's taken by the thread that called Py_FinalizeEx().
When Py_IsFinalizing() is true, all threads other than the one that called Py_FinalizeEx() are daemon threads, and they cannot call the Python API (including ThreadHandle_join).
So, self must be a daemon thread.
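
The same invariant can be checked from Python, as a hedged sketch (the Probe class is illustrative and not part of the PR): during finalization, any still-alive thread other than the finalizing one is a daemon thread.

```python
import sys
import threading


class Probe:
    def __del__(self):
        # If this finalizer runs during interpreter finalization, the only
        # thread executing Python code is the one driving Py_FinalizeEx();
        # any other thread still reported alive must be a daemon thread.
        if sys.is_finalizing():
            others = [t for t in threading.enumerate()
                      if t is not threading.current_thread()]
            assert all(t.daemon for t in others)


probe = Probe()  # finalized during interpreter shutdown
```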

@encukou (Member, Author) commented

I'll merge on ~Friday if there are no objections.

@encukou (Member, Author) commented

... And I went offline for a month after writing that.

I'm back now; merging.

@encukou merged commit 4ebbfcf into python:main on Apr 28, 2025
42 checks passed
@encukou deleted the no-join-in-finalize branch on April 28, 2025 at 13:48