
Conversation

@GarrettWu
Contributor

Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:

  • Make sure to open an issue as a bug/issue before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
  • Ensure the tests and linter pass
  • Code coverage does not decrease (if any source code was changed)
  • Appropriate docs were updated (if necessary)

Fixes b/462105877

GarrettWu self-assigned this on Nov 20, 2025.
GarrettWu requested review from a team as code owners on Nov 20, 2025, 00:20.
The product-auto-label bot added labels on Nov 20, 2025: size: s (Pull request size is small), api: bigquery (Issues related to the googleapis/python-bigquery-dataframes API).
-results.append(joined_df_train[columns])
-results.append(joined_df_test[columns])
+results.append(joined_df_train[columns].cache())
+results.append(joined_df_test[columns].cache())
Contributor

This is a lot of .cache() calls. I think where the caching ideally happens is actually inside the block.split method. This way, the ordering is locked in, but only a single table is cached total, which should be a lot faster.
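The cost argument above can be sketched with a toy model (plain Python; `LazyTable` and its methods are hypothetical stand-ins, not the bigframes API). `depth` counts pending transformations that get re-executed every time an uncached table is materialized, so caching each split result separately re-pays the shared upstream chain, while caching the shared block once inside `split` pays for it a single time:

```python
# Toy cost model for lazy tables. LazyTable is a hypothetical
# stand-in, not the bigframes API.
class LazyTable:
    def __init__(self, depth=0):
        # depth = pending transformations replayed on materialization
        self.depth = depth

    def transform(self):
        # One more pending step stacked on this table's chain.
        return LazyTable(self.depth + 1)

    def cache(self):
        # Materialize now, paying for the pending chain once;
        # afterwards this table reads as a flat cached table.
        work = self.depth
        self.depth = 0
        return work


# Caching each split result separately re-pays the shared chain:
base = LazyTable(depth=5)  # e.g. the hashed-ordering block
cost_per_result = base.transform().cache() + base.transform().cache()

# Caching the shared block once inside split is cheaper:
base = LazyTable(depth=5)
cost_shared = base.cache()  # pay for the shared chain once
cost_shared += base.transform().cache() + base.transform().cache()

print(cost_per_result, cost_shared)  # 12 vs 7
```

Under this model the per-result strategy costs 12 units of work to the shared-cache strategy's 7, which is the intuition behind caching a single table inside `block.split`.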

Contributor (Author)

I can move the caches outside of the loop, which removes some queries. But otherwise (caching anywhere inside block.split) it doesn't work. Do you have any insight into why that is?

Contributor

Hmm, really? I would expect caching anywhere around this area:

block, string_ordering_col = block.apply_unary_op(
    ordering_col, ops.AsTypeOp(to_type=bigframes.dtypes.STRING_DTYPE)
)
# Apply hash method to sum col and order by it.
block, string_sum_col = block.apply_binary_op(
    string_ordering_col, random_state_col, ops.strconcat_op
)
block, hash_string_sum_col = block.apply_unary_op(string_sum_col, ops.hash_op)
block = block.order_by(
    [ordering.OrderingExpression(ex.deref(hash_string_sum_col))]
)
to work OK (ideally at the end of this block).

Contributor (Author)

GarrettWu commented on Nov 25, 2025

No, it doesn't work no matter where I put it within block.split. Only calling cache() on the end results helps.

(screenshot: screen/6A2RFRNf9m96Qvo)

Could it be a bug in some deeper code?
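One plausible mechanism for the behavior reported above, sketched in plain Python (this is a toy illustration, not the bigframes implementation; the seeds here simulate the nondeterminism of an uncached randomized ordering): if train and test are each materialized by a separate uncached execution, the random ordering is re-drawn each time, so the pair no longer forms a clean partition of the input. Caching only the end results pins down one concrete execution for each.

```python
import random


def hash_split(rows, seed):
    # Toy stand-in for a randomized split: each execution draws its
    # own random sort keys, like re-running an uncached query whose
    # ORDER BY depends on fresh randomness.
    rnd = random.Random(seed)
    shuffled = sorted(rows, key=lambda _: rnd.random())
    cut = len(shuffled) * 8 // 10  # 80/20 train/test split
    return shuffled[:cut], shuffled[cut:]


rows = list(range(100))

# One execution: train and test partition the rows exactly.
train, test = hash_split(rows, seed=1)
assert sorted(train + test) == rows

# Two executions (as when train and test are each materialized by a
# separate uncached query): different shuffles, so train_a and test_b
# may miss some rows and double-count others.
train_a, _ = hash_split(rows, seed=1)
_, test_b = hash_split(rows, seed=2)
```

If something like this is happening, a cache() placed inside block.split that is not actually shared by both downstream reads would explain why only caching the final results fixes it.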


2 participants: @GarrettWu, @TrevorBergeron