
Conversation

@colesbury (Contributor) commented Dec 18, 2025

The use of memmove and _Py_memory_repeat was not thread-safe in the free threading build in some cases. In theory, memmove and _Py_memory_repeat can copy byte-by-byte instead of pointer-by-pointer, so concurrent readers could see uninitialized data or tearing.

Additionally, we should be using "release" (or stronger) ordering to be compliant with the C11 memory model when copying objects within a list.

I've also updated list_resize so that it can never fail when shrinking a list. This simplifies some of the call sites.
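
To make the tearing hazard concrete, here is a minimal sketch in plain C11 atomics. This is an illustration only, not the PR's code: CPython operates on PyObject** arrays through its internal _Py_atomic_store_ptr_release helper, which memory_order_release corresponds to here.

#include <stdatomic.h>
#include <stddef.h>

/* Sketch: copy an array of pointers with one atomic release store per
   slot. A concurrent reader that loads a slot with acquire ordering sees
   either the old or the new pointer, never a torn mix of bytes, and all
   writes made before the store are visible to it. A plain memmove makes
   no such guarantee: it may copy byte-by-byte. */
static void
copy_ptrs_release(void *_Atomic *dest, void *const *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        atomic_store_explicit(&dest[i], src[i], memory_order_release);
    }
}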

@bedevere-bot

🤖 New build scheduled with the buildbot fleet by @colesbury for commit 97e8d13 🤖

Results will be shown at:

https://buildbot.python.org/all/#/grid?branch=refs%2Fpull%2F142957%2Fmerge

If you want to schedule another build, you need to add the 🔨 test-with-refleak-buildbots label again.

@bedevere-bot removed the 🔨 test-with-refleak-buildbots label Dec 18, 2025
Comment on lines 1094 to 1100
PyObject **src = items;
PyObject **dst = items + input_size;
Py_ssize_t remaining = output_size - input_size;
while (remaining > 0) {
_Py_atomic_store_ptr_release(dst++, *src++);
remaining--;
}
Member commented:

Why can't this just call list_atomic_memmove?

@colesbury (Author) replied:

Yeah, that will simplify this a bit and avoid the special case here (since list_atomic_memmove has it internally).
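
For reference, here is a sketch of the overlap handling such a helper needs, using plain C11 atomics and a hypothetical signature (the PR's list_atomic_memmove takes PyObject** arguments and uses _Py_atomic_store_ptr_release):

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch: memmove semantics for a pointer array, one release store per
   slot. As with memmove, the copy direction must follow the overlap:
   forward when dest is below src, backward otherwise. dest == src is
   the special case that can be handled internally as a no-op. */
static void
ptr_memmove_release(void *_Atomic *dest, void *const *src, size_t n)
{
    if ((uintptr_t)dest == (uintptr_t)src) {
        return;
    }
    if ((uintptr_t)dest < (uintptr_t)src) {
        for (size_t i = 0; i < n; i++) {
            atomic_store_explicit(&dest[i], src[i], memory_order_release);
        }
    }
    else {
        for (size_t i = n; i > 0; i--) {
            atomic_store_explicit(&dest[i - 1], src[i - 1],
                                  memory_order_release);
        }
    }
}

With the overlap and no-op cases handled internally, the quoted loop above collapses into a single call.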

goto Error;
}
Py_ssize_t tail = Py_SIZE(a) - ihigh;
list_atomic_memmove(a, &item[ihigh+d], &item[ihigh], tail);
Member commented:

This is so much nicer, I wish we'd refactored this the first time round :P

// Pointer-by-pointer memmove for PyObject** arrays that is safe
// for shared lists in Py_GIL_DISABLED builds.
static void
list_atomic_memmove(PyListObject *a, PyObject **dest, PyObject **src, Py_ssize_t n)
Member commented:

Not a fan of the name (it's not the memmove as a whole that's atomic, just the individual writes) but I can't think of a better name and I think the comment explains it well enough.

@colesbury (Author) replied:

Yeah, it's a bit misleading. I can name it ptr_wise_atomic_memmove (as in Byte-wise atomic memcpy)
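
As an aside on why release is the right ordering strength here: it only pays off because the reader side pairs it with acquire loads (in CPython, presumably via the matching _Py_atomic acquire-load helpers). A sketch of the reader half in plain C11 atomics:

#include <stdatomic.h>

/* Sketch: the reader half of the release/acquire pairing. If this
   acquire load observes a pointer published by a release store, every
   write that happened before that store (e.g. the object's initialized
   fields) is guaranteed visible to the loading thread. */
static void *
load_slot_acquire(void *_Atomic *slot)
{
    return atomic_load_explicit(slot, memory_order_acquire);
}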

@colesbury merged commit e46f28c into python:main on Dec 19, 2025
44 checks passed
@colesbury deleted the gh-129069-list-memmove branch on December 19, 2025 at 23:06
cocolato pushed a commit to cocolato/cpython that referenced this pull request Dec 22, 2025
…-142957) The use of memmove and _Py_memory_repeat were not thread-safe in the free threading build in some cases. In theory, memmove and _Py_memory_repeat can copy byte-by-byte instead of pointer-by-pointer, so concurrent readers could see uninitialized data or tearing. Additionally, we should be using "release" (or stronger) ordering to be compliant with the C11 memory model when copying objects within a list.

3 participants: @colesbury, @bedevere-bot, @Yhg1s