
feat: add OpenAI Responses API model implementation#975

Open
notgitika wants to merge 8 commits into strands-agents:main from
notgitika:gitikavj/add-openai-responses-model

Conversation

@notgitika (Contributor) commented Oct 3, 2025

Description

This PR implements OpenAI Responses API as a separate model provider supporting streaming, structured output and tool calling.

Note: this commit does not include the extra capabilities that the Responses API supports such as the built-in tools and the stateful conversation runs.

Related Issues

#253

Documentation PR

TODO (coming next): add a section for the Responses API to the existing OpenAI model page.

Type of Change

New feature

Testing

Added a unit test file modeled on the existing test_openai.py model provider tests. Reused the integ tests in the same file, test_model_openai.py, via pytest.mark.parametrize with the OpenAIResponses model so that I could verify the functionality is the same between the two models.

I ran everything in the CONTRIBUTING.md file.

hatch run integ-test
================== 81 passed, 68 skipped, 49 warnings in 106.56s (0:01:46) ==========

hatch run test-integ tests_integ/models/test_model_openai.py -v

============= 18 passed, 2 skipped in 13.44s =============

pre-commit run --all-files

  • I ran hatch run prepare --> yes, all tests pass

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@fsajjad commented Oct 24, 2025

Hey Team, this is one of the features my customer requested, and they see its absence as a blocker to adopting Strands Agents. They mentioned Strands currently only supports the deprecated Chat Completions API and not the Responses API. When can we expect this to be merged? And will this also enable streaming structured output?

@dbschmigelski (Member) left a comment

A major version bump is proposed in #1370.

Adding a comment here: we would need to consider the case where the user has v1 installed versus v2.

For example

import pydantic
from packaging import version

# Detect the pydantic major version once at the module level
PYDANTIC_V2 = version.parse(pydantic.VERSION) >= version.parse("2.0.0")

if PYDANTIC_V2:
    # v2 renamed the fields attribute to `model_fields`
    def get_model_fields(model):
        return model.model_fields
else:
    def get_model_fields(model):
        return model.__fields__

@notgitika notgitika force-pushed the gitikavj/add-openai-responses-model branch from 7c05e84 to 8eea5d9 on January 21, 2026 06:16
codecov bot commented Jan 21, 2026

Codecov Report

❌ Patch coverage is 91.20879% with 24 lines in your changes missing coverage. Please review.

Files with missing lines                 Patch %   Lines
src/strands/models/openai_responses.py   92.22%    2 Missing and 19 partials ⚠️
src/strands/models/__init__.py            0.00%    3 Missing ⚠️


@notowen333 left a comment
Some refactoring is required to make this maintainable. Otherwise LGTM

…-responses-model

# Conflicts:
#	tests_integ/models/test_model_openai.py
@notgitika notgitika force-pushed the gitikavj/add-openai-responses-model branch from 6a70455 to 83e2982 on February 16, 2026 22:10
@github-actions github-actions bot removed the size/xl label Feb 16, 2026

github-actions bot commented Mar 2, 2026

Assessment: Comment

This PR adds a well-structured OpenAI Responses API model implementation with comprehensive test coverage (~91%). The core functionality is solid and the integration test reuse via parameterization is a good approach.

Review Categories
  • API Consistency: Consider adding client parameter for dependency injection parity with OpenAIModel
  • Exports: Module needs to be exported from strands.models.__init__.py (flagged by Unshure)
  • Code Style: Logging should follow field-value pairs pattern per AGENTS.md
  • Minor: Unused variable in test fixture, TODO comment for latency

The implementation demonstrates a thorough understanding of the Responses API and handles edge cases well (reasoning content, incomplete responses, tool call accumulation). The existing review comments about streaming typed events can be addressed in a follow-up iteration.
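The field-value logging style flagged under Code Style might look like the sketch below. The function name and message wording are illustrative; the exact AGENTS.md format is assumed, not quoted from the repo.

```python
import logging

logger = logging.getLogger(__name__)

def invoke_model(model_id: str, streaming: bool) -> None:
    # Field=value pairs first, then a short description of the action,
    # instead of free-form prose like "Step 1: invoking model gpt-4o".
    logger.debug("model_id=<%s>, streaming=<%s> | invoking responses api", model_id, streaming)
```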

@Unshure
Copy link
Member

Unshure commented Mar 2, 2026

/strands can you address the comments? Also can you leave a comment with a suggestion on how to fix this integ test failure:

[8](https://github.com/strands-agents/sdk-python/actions/runs/22079285449/job/63801218637?pr=975#step:6:1229)
E   ImportError: OpenAIResponsesModel requires openai>=2.0.0 (found 1.109.1). Install/upgrade with: pip install -U openai. For older SDKs, use OpenAIModel (Chat Completions).
ERROR tests_integ/test_summarizing_conversation_manager_integration.py - ImportError while importing test module '/home/runner/work/sdk-python/sdk-python/tests_integ/test_summarizing_conversation_manager_integration.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../../.local/share/hatch/env/virtual/strands-agents/s8Ez09FR/hatch-test.py3.10/lib/python3.10/site-packages/_pytest/python.py:507: in importtestmodule
    mod = import_path(
../../../.local/share/hatch/env/virtual/strands-agents/s8Ez09FR/hatch-test.py3.10/lib/python3.10/site-packages/_pytest/pathlib.py:587: in import_path
    importlib.import_module(module_name)
/opt/hostedtoolcache/Python/3.10.19/x64/lib/python3.10/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1050: in _gcd_import
    ???
<frozen importlib._bootstrap>:1027: in _find_and_load
    ???
<frozen importlib._bootstrap>:1006: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:688: in _load_unlocked
    ???
../../../.local/share/hatch/env/virtual/strands-agents/s8Ez09FR/hatch-test.py3.10/lib/python3.10/site-packages/_pytest/assertion/rewrite.py:197: in exec_module
    exec(co, module.__dict__)
tests_integ/test_summarizing_conversation_manager_integration.py:24: in <module>
    from tests_integ.models import providers
tests_integ/models/providers.py:19: in <module>
    from strands.models.openai_responses import OpenAIResponsesModel
src/strands/models/openai_responses.py:38: in <module>
    raise ImportError(
E   ImportError: OpenAIResponsesModel requires openai>=2.0.0 (found 1.109.1). Install/upgrade with: pip install -U openai. For older SDKs, use OpenAIModel (Chat Completions).

- Add OpenAIResponsesModel to lazy loading in strands.models.__init__.py
- Remove 'Step X:' prefixes from logging to match project conventions
- Fix unused variable in test fixture by removing model_name from params

github-actions bot commented Mar 2, 2026

Integration Test Failure Fix

The error occurs because tests_integ/models/providers.py has a top-level import of OpenAIResponsesModel (line 19), which triggers the openai version check at module load time. Since the CI environment has openai 1.109.1 (< 2.0.0), the import fails before any tests can run.

Suggested Fix

Update tests_integ/models/providers.py to use lazy import inside the factory lambda:

# Remove the top-level import:
# from strands.models.openai_responses import OpenAIResponsesModel  # DELETE THIS LINE

# Change the provider definition to use lazy import:
openai_responses = ProviderInfo(
    id="openai_responses",
    environment_variable="OPENAI_API_KEY",
    factory=lambda: __import__(
        "strands.models.openai_responses", fromlist=["OpenAIResponsesModel"]
    ).OpenAIResponsesModel(
        model_id="gpt-4o",
        client_args={
            "api_key": os.getenv("OPENAI_API_KEY"),
        },
    ),
)

Alternatively, for cleaner code:

def _create_openai_responses_model():
    from strands.models.openai_responses import OpenAIResponsesModel
    return OpenAIResponsesModel(
        model_id="gpt-4o",
        client_args={"api_key": os.getenv("OPENAI_API_KEY")},
    )

openai_responses = ProviderInfo(
    id="openai_responses",
    environment_variable="OPENAI_API_KEY",
    factory=_create_openai_responses_model,
)

This defers the import until the model is actually instantiated, avoiding the version check in environments that don't have openai >= 2.0.0.

@notgitika (Contributor, Author) commented:

I opted for a different approach for the integ test fix.

The lazy import approach suggested only fixes the issue in providers.py, but test_model_openai.py also imports OpenAIResponsesModel directly for its tests. We'd still crash at test collection time there, I believe.

With this flag approach, both files can check _openai_responses_available and skip the Responses model when it's not available.


github-actions bot commented Mar 3, 2026

Assessment: Approve

The PR has addressed the key issues from the previous review:

Changes Since Last Review
  • Module export - OpenAIResponsesModel now exported via lazy loading in __init__.py
  • Integration test failure - Clean fix using _openai_responses_available flag with try/except
  • Logging style - Removed "Step X:" prefixes
  • Unused variable - Test fixture now correctly unpacks model_class, model_id

The conditional import approach for handling older openai versions is well-designed: it allows the module to be skipped gracefully when openai < 2.0.0 rather than crashing at import time.

Minor suggestions from previous review (client injection parity, TODO comment) can be addressed in follow-up iterations. The core implementation is solid and ready for merge.

@Unshure (Member) left a comment
Integ test failing for unrelated reason

@Unshure Unshure dismissed dbschmigelski’s stale review March 3, 2026 20:37

Comments addressed

7 participants