gh-134584: Eliminate redundant refcounting from _UNPACK_SEQUENCE_TWO_TUPLE #142952
Conversation
Fidget-Spinner commented Dec 18, 2025
I need to discuss this one with Mark, as it might cause a stack overflow if the stack bounds are exceeded.
markshannon commented Dec 23, 2025
@cocolato Thanks for the contribution. Unfortunately, I don't think that this transformation is going to be useful. There is potentially an optimization we can do with the `UNPACK_SEQUENCE` family: if we know that the reference to the tuple being unpacked is unique, the refcounting can be elided entirely, since the item references can simply be taken over from the tuple.
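A minimal sketch of that idea, written as a plain helper rather than the real bytecodes.c instruction body (the function name and signature here are invented for illustration): when the stack holds the only reference to the tuple, the item references can be stolen outright, with no refcount traffic on the items at all.

```c
#include <Python.h>

/* Sketch: unpack a two-element tuple whose reference is known to be
 * unique.  Because nothing else refers to the tuple, its item
 * references can be handed to the stack directly; the final Py_DECREF
 * destroys the (now empty) tuple. */
static void
unpack_two_tuple_unique(PyObject *seq, PyObject **out0, PyObject **out1)
{
    assert(PyTuple_CheckExact(seq) && PyTuple_GET_SIZE(seq) == 2);
    assert(Py_REFCNT(seq) == 1);        /* the uniqueness precondition */

    *out0 = PyTuple_GET_ITEM(seq, 0);   /* steal: no Py_INCREF */
    *out1 = PyTuple_GET_ITEM(seq, 1);
    PyTuple_SET_ITEM(seq, 0, NULL);     /* so freeing seq does not */
    PyTuple_SET_ITEM(seq, 1, NULL);     /* decref the stolen items  */
    Py_DECREF(seq);                     /* frees the empty tuple */
}
```

The hard part is establishing the refcount-is-1 precondition: checking it at runtime costs a branch, so the rest of this thread turns to proving it statically in the Tier 2 optimizer.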
cocolato commented Dec 23, 2025
Thanks for the explanation! I now see that it doesn't reduce refcount overhead for the common case of consuming new references (return values), so I'll convert this PR to a draft. I might dig into how the Tier 2 optimizer could track such uniqueness.
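To make the "new references" case concrete, here is a simplified model of what the current specialized instruction effectively does (the helper name is invented; this is not the exact bytecodes.c code). For a statement like `a, b = f()`, the tuple returned by `f` typically arrives on the stack as its only reference, so the two increfs and one decref below are exactly the overhead that uniqueness tracking could remove.

```c
#include <Python.h>

/* Simplified model of the current behaviour: incref both items, then
 * decref the tuple -- even when seq is a fresh return value and the
 * stack's reference to it is the only one. */
static void
unpack_two_tuple_current(PyObject *seq, PyObject **out0, PyObject **out1)
{
    assert(PyTuple_CheckExact(seq) && PyTuple_GET_SIZE(seq) == 2);
    *out0 = Py_NewRef(PyTuple_GET_ITEM(seq, 0));  /* incref item 0 */
    *out1 = Py_NewRef(PyTuple_GET_ITEM(seq, 1));  /* incref item 1 */
    Py_DECREF(seq);      /* drop the stack's reference to the tuple */
}
```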
Fidget-Spinner commented Jan 1, 2026
@cocolato please reduce this PR to just `_UNPACK_SEQUENCE_TWO_TUPLE`.
cocolato commented Jan 2, 2026 • edited
I have reduced this PR to `_UNPACK_SEQUENCE_TWO_TUPLE`.
Fidget-Spinner commented Jan 2, 2026
Cool! I think you can skip those for now and wait until we can do what Mark suggested. Let me know if you need any help understanding the implementation.
Fidget-Spinner commented Jan 2, 2026
Hmm, thinking about this further, I think Mark is right. I'm really sorry for asking you to reduce the PR! We need to track unique references to get the most benefit.
cocolato commented Jan 4, 2026 • edited
Hi, I want to confirm: if we implement "Unique Reference Tracking", should we track only the current definitions shown in cpython/Include/internal/pycore_optimizer.h, lines 196 to 204 in 9609574?
cocolato commented Jan 4, 2026
Or should we open a new issue?
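In case it helps frame the new issue, here is a purely hypothetical sketch of the shape such tracking could take. None of these names (`HypoSym`, `is_unique`, the helpers) come from pycore_optimizer.h; they are invented for this sketch. The idea: the optimizer's symbolic values carry a uniqueness bit that is set where a fresh reference is known to be created, cleared whenever the value can escape, and consulted when optimizing `_UNPACK_SEQUENCE_TWO_TUPLE`.

```c
#include <stdbool.h>

/* Hypothetical, for illustration only: not the real JIT symbol type. */
typedef struct {
    int type_tag;    /* what the optimizer already knows, e.g. "a tuple" */
    bool is_unique;  /* new bit: this symbol is the only live reference */
} HypoSym;

/* Set where the abstract interpreter sees a fresh reference created,
 * e.g. the output of a tuple-building uop. */
static void
hypo_sym_set_unique(HypoSym *sym)
{
    sym->is_unique = true;
}

/* Any escape (storing the value, passing it to a call, duplicating it
 * on the stack) must clear the bit, since another reference may exist. */
static void
hypo_sym_clear_unique(HypoSym *sym)
{
    sym->is_unique = false;
}

/* Consulted when optimizing _UNPACK_SEQUENCE_TWO_TUPLE: only a unique
 * incoming tuple symbol allows the refcounting to be elided. */
static bool
hypo_sym_is_unique(const HypoSym *sym)
{
    return sym->is_unique;
}
```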
Fidget-Spinner commented Jan 4, 2026
@cocolato open a new issue for this please, thank you. You can start small with just tuple ops for now, then we can expand in the future.