
Conversation

@steven10a (Collaborator) commented Oct 30, 2025

Fixing safety_identifier implementation

  • Previously we were incorrectly adding safety_identifier as an HTTP header
  • Changed to use the supported safety_identifier parameter

Note: safety_identifier is only supported by the standard OpenAI responses and chat.completions endpoints. It is not implemented for Azure or third-party providers.
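As a rough sketch of the behavior the note describes: the identifier is added to the request kwargs only when the client targets the official OpenAI API. The helper name and the base-URL check below are assumptions for illustration, not the library's actual code.

```python
# Hypothetical sketch: include safety_identifier in request kwargs only for
# the official OpenAI endpoint; Azure and third parties are left untouched.
SAFETY_IDENTIFIER = "openai-guardrails-python"

def build_request_kwargs(base_url: str, **kwargs) -> dict:
    """Return request kwargs, adding safety_identifier for api.openai.com only."""
    if "api.openai.com" in base_url:
        kwargs["safety_identifier"] = SAFETY_IDENTIFIER
    return kwargs

official = build_request_kwargs("https://api.openai.com/v1", model="gpt-4.1")
azure = build_request_kwargs("https://example.openai.azure.com/v1", model="gpt-4.1")
print("safety_identifier" in official)  # True
print("safety_identifier" in azure)     # False
```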

CopilotAI review requested due to automatic review settings October 30, 2025 20:26

CopilotAI left a comment


Pull Request Overview

This PR replaces the header-based safety identifier approach with a parameter-based approach for tracking guardrails library usage with the OpenAI API. The change removes the _openai_utils module that injected headers and instead conditionally adds a safety_identifier parameter to API calls when using the official OpenAI API (excluding Azure and alternative providers).

Key changes:

  • Removed _openai_utils.py module containing header-based safety identifier logic
  • Added _supports_safety_identifier() helper function in three locations to detect compatible clients
  • Updated all OpenAI client instantiations to remove prepare_openai_kwargs() wrapper calls
  • Added conditional safety_identifier parameter to LLM API calls based on client compatibility

Reviewed Changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| src/guardrails/_openai_utils.py | Deleted header-based safety identifier utilities module |
| src/guardrails/checks/text/llm_base.py | Added safety identifier support detection and parameter injection for LLM checks |
| src/guardrails/resources/chat/chat.py | Added safety identifier support detection and parameter injection for chat completions |
| src/guardrails/resources/responses/responses.py | Added safety identifier support detection and parameter injection for responses API |
| src/guardrails/client.py | Removed prepare_openai_kwargs() calls from client initialization |
| src/guardrails/agents.py | Removed prepare_openai_kwargs() calls from default context creation |
| src/guardrails/runtime.py | Removed prepare_openai_kwargs() calls from runtime default context |
| src/guardrails/evals/guardrail_evals.py | Removed prepare_openai_kwargs() calls from eval client creation |
| src/guardrails/checks/text/moderation.py | Removed prepare_openai_kwargs() calls from moderation client |
| src/guardrails/utils/create_vector_store.py | Removed prepare_openai_kwargs() calls from vector store client |
| tests/unit/test_safety_identifier.py | Added comprehensive tests for safety identifier support detection |
| tests/unit/test_openai_utils.py | Deleted tests for removed header-based utilities |
| tests/unit/test_agents.py | Removed assertions checking for safety identifier headers |


mock_client.base_url = "https://example.openai.azure.com/v1"
mock_client.__class__.__name__ = "AsyncAzureOpenAI"

# Azure detection happens via isinstance check, but we can test with class name

CopilotAI commented Oct 30, 2025


This comment is misleading. The test creates a real Azure client and tests it with isinstance(), which is correct for llm_base.py. However, the chat.py and responses.py implementations use class name string matching instead, not isinstance checks.

Suggested change
# Azure detection happens via isinstance check, but we can test with class name
# In llm_base.py, Azure detection happens via isinstance check (this test is correct).
# Note: chat.py and responses.py use class name string matching instead.
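The distinction the reviewer is drawing can be shown with stub classes (these stand in for openai.AsyncAzureOpenAI; the helper names below are illustrative, not the PR's code). An isinstance check still catches subclasses, while class-name string matching does not:

```python
# Stub standing in for openai.AsyncAzureOpenAI.
class AsyncAzureOpenAI:
    pass

def is_azure_by_isinstance(client) -> bool:
    # llm_base.py style: robust to subclassing
    return isinstance(client, AsyncAzureOpenAI)

def is_azure_by_class_name(client) -> bool:
    # chat.py / responses.py style: matches only the literal class name
    return type(client).__name__ == "AsyncAzureOpenAI"

class CustomAzureClient(AsyncAzureOpenAI):  # subclass with a different name
    pass

client = CustomAzureClient()
print(is_azure_by_isinstance(client))   # True: subclass still detected
print(is_azure_by_class_name(client))   # False: name no longer matches
```

This is why a test that renames a mock's `__class__.__name__` exercises the string-matching implementations but not the isinstance-based one.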


CopilotAI left a comment


Pull Request Overview

Copilot reviewed 14 out of 14 changed files in this pull request and generated 1 comment.



Comment on lines 65 to 71
__all__ = [
    "LLMConfig",
    "LLMOutput",
    "LLMErrorOutput",
    "create_llm_check_fn",
    "LLMOutput",
    "create_error_result",
    "create_llm_check_fn",
]

CopilotAI commented Oct 30, 2025


[nitpick] The __all__ list was reordered alphabetically, but this appears to be an incidental change unrelated to the PR's main purpose of refactoring the safety identifier. The previous ordering grouped related items together ('LLMConfig', 'LLMOutput', 'LLMErrorOutput', then functions). Consider reverting this change to keep the PR focused on its core objective.

@gabor-openai (Collaborator) left a comment


LGTM ty!

@gabor-openai (Collaborator) left a comment


LGTM TY!

@gabor-openai changed the title from "fixing safety identifier" to "Set safety_identifier to openai-guardrails-python" on Oct 30, 2025
@gabor-openai merged commit 06fa983 into main on Oct 30, 2025
3 checks passed
@gabor-openai deleted the dev/steven/safety_identifiers branch October 30, 2025 20:46