feat(langgraph): Usage attributes on invocation spans #5211
Conversation
alexander-alderman-webb commented Dec 10, 2025 • edited
codecovbot commented Dec 10, 2025 • edited
Codecov Report
❌ Patch coverage is
Additional details and impacted files

```
@@            Coverage Diff             @@
##           master    #5211      +/-   ##
==========================================
+ Coverage   84.17%   84.22%   +0.04%
==========================================
  Files         181      181
  Lines       18443    18486      +43
  Branches     3283     3295      +12
==========================================
+ Hits        15524    15569      +45
+ Misses       1904     1899       -5
- Partials     1015     1018       +3
```
Bug: Usage data miscounted when PII collection disabled
When should_send_default_pii() is false or include_prompts is false, input_messages remains None. This causes _get_new_messages in _set_response_attributes to return all output messages instead of just the new ones. Since LangGraph state accumulates messages, usage data will include tokens from all messages in the response rather than only the new messages added during this invocation. The input messages need to be parsed unconditionally (at least for _get_new_messages) to correctly calculate usage data regardless of PII settings.
sentry_sdk/integrations/langgraph.py, lines 184 to 204 in 4f3fab3:
```python
# Store input messages to later compare with output
input_messages = None
if (
    len(args) > 0
    and should_send_default_pii()
    and integration.include_prompts
):
    input_messages = _parse_langgraph_messages(args[0])

if input_messages:
    normalized_input_messages = normalize_message_roles(input_messages)
    scope = sentry_sdk.get_current_scope()
    messages_data = truncate_and_annotate_messages(
        normalized_input_messages, span, scope
    )
    if messages_data is not None:
        set_data_normalized(
            span,
            SPANDATA.GEN_AI_REQUEST_MESSAGES,
            messages_data,
            unpack=False,
        )
```
sentry_sdk/integrations/langgraph.py, lines 240 to 260 in 4f3fab3:
```python
input_messages = None
if (
    len(args) > 0
    and should_send_default_pii()
    and integration.include_prompts
):
    input_messages = _parse_langgraph_messages(args[0])

if input_messages:
    normalized_input_messages = normalize_message_roles(input_messages)
    scope = sentry_sdk.get_current_scope()
    messages_data = truncate_and_annotate_messages(
        normalized_input_messages, span, scope
    )
    if messages_data is not None:
        set_data_normalized(
            span,
            SPANDATA.GEN_AI_REQUEST_MESSAGES,
            messages_data,
            unpack=False,
        )
```
alexander-alderman-webb commented Dec 11, 2025
This is okay; input messages do not have token info attached to them.
Merged 40e5083 into master.
Description
Add prompt, response, and total token counts to LangGraph invocation spans.
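The aggregation can be sketched as follows. This is a minimal illustration, not the PR's implementation: the `usage_metadata` shape (`input_tokens` / `output_tokens` / `total_tokens`) follows LangChain's convention for AI messages, and the `gen_ai.usage.*` attribute keys are assumed names in the OpenTelemetry GenAI style.

```python
# Minimal sketch: sum token usage across the messages produced by an
# invocation, then expose the totals as span attributes.
def sum_token_usage(messages):
    totals = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
    for msg in messages:
        # Only AI messages carry usage_metadata; others contribute nothing.
        usage = msg.get("usage_metadata") or {}
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

new_messages = [
    # Plain input messages have no usage metadata attached.
    {"role": "user", "content": "hi"},
    {
        "role": "assistant",
        "content": "hello",
        "usage_metadata": {"input_tokens": 3, "output_tokens": 5, "total_tokens": 8},
    },
]

usage = sum_token_usage(new_messages)
# Assumed attribute naming; the SDK's own SPANDATA constants would be used.
span_data = {f"gen_ai.usage.{key}": value for key, value in usage.items()}
```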
Issues
Contributes to #5170
Reminders
- Run `tox -e linters`.
- Use a conventional commit prefix in the PR title (`feat:`, `fix:`, `ref:`, `meta:`).