
Conversation

@tinalenguyen (Member) commented Jan 17, 2026

To use:

from openai.types import Reasoning

llm = openai.responses.LLM(reasoning=Reasoning(effort="low", summary="low"))

Summary by CodeRabbit

  • New Features
    • Language model operations now support configurable reasoning effort levels for enhanced flexibility
    • Compatible models automatically receive optimized default reasoning settings to ensure reliable performance
    • Reasoning configuration seamlessly integrates with all chat interactions
    • Users can customize reasoning behavior to meet specific performance and resource requirements


@chenghao-mou chenghao-mou requested a review from a team January 17, 2026 22:59
coderabbitai bot commented Jan 17, 2026

📝 Walkthrough

This change adds support for OpenAI's reasoning parameter to the LLM class. It introduces a new reasoning field to the options, accepts it as a constructor parameter with model-specific defaults, and propagates it through the chat API calls.

Changes

Cohort: Reasoning Feature Integration
File(s): livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py
Summary:
  • Added a reasoning: NotGivenOr[Reasoning] field to the _LLMOptions class.
  • Updated LLM.__init__() to accept a reasoning parameter with model-specific defaults (effort="none" for gpt-5.1/5.2, effort="minimal" otherwise when supported).
  • Integrated reasoning propagation into the chat() extra parameters.
  • Added imports for the Reasoning type and the _supports_reasoning_effort helper.
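The default-selection logic described above can be sketched roughly as follows. This is a simplified, self-contained sketch, not the plugin's actual code: Reasoning and _supports_reasoning_effort here are stand-ins for the real openai type and plugin helper, and the model-prefix checks are assumptions inferred from the summary.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reasoning:
    """Simplified stand-in for openai.types.Reasoning (effort only)."""
    effort: str

def _supports_reasoning_effort(model: str) -> bool:
    # Hypothetical stand-in for the plugin helper in
    # livekit/plugins/openai/models.py; the prefixes are assumptions.
    return model.startswith(("o1", "o3", "o4", "gpt-5"))

def default_reasoning(model: str) -> Optional[Reasoning]:
    """Pick a default per the walkthrough: effort="none" for gpt-5.1/5.2,
    effort="minimal" for other reasoning-capable models, nothing otherwise."""
    if not _supports_reasoning_effort(model):
        return None
    if model.startswith(("gpt-5.1", "gpt-5.2")):
        return Reasoning(effort="none")
    return Reasoning(effort="minimal")
```

A caller-supplied reasoning value would bypass this selection entirely; the defaults only apply when the constructor parameter is not given.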

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 With reasoning now woven through the thread,
Our LLM thinks before it's said!
From "minimal" effort to "none" so fine,
The rabbit hops through logic's design! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The PR title "add reasoning param for openai responses LLM" clearly and directly summarizes the main change: adding a reasoning parameter to the OpenAI responses LLM class, which matches the changeset content.



coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py`,
around lines 77-81: the current branch in the reasoning default logic overrides
OpenAI's own defaults. Change the block that uses _supports_reasoning_effort and
Reasoning so that, when reasoning is not provided, it does NOT force "minimal"
for pre-5.1 models. Set Reasoning(effort="none") only for models that must
default to none (e.g., "gpt-5.1"); otherwise leave reasoning as None/omitted so
the API uses its documented default (or explicitly set "medium" only if the
override is intentional). Update the conditional on model and
_supports_reasoning_effort accordingly and remove the forced "minimal"
assignment.
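The suggested fix amounts to something like the sketch below. Names, the helper, and the model-prefix checks are hypothetical stand-ins, not the plugin's actual code; the point is that returning None omits the parameter so the API applies its own documented default.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reasoning:
    """Simplified stand-in for openai.types.Reasoning."""
    effort: str

def _supports_reasoning_effort(model: str) -> bool:
    # Hypothetical stand-in for the plugin helper; prefixes are assumptions.
    return model.startswith(("o1", "o3", "o4", "gpt-5"))

def suggested_default_reasoning(model: str) -> Optional[Reasoning]:
    """Force effort="none" only where required (gpt-5.1-style models);
    return None for everything else so the parameter is omitted and the
    API's documented default applies."""
    if _supports_reasoning_effort(model) and model.startswith("gpt-5.1"):
        return Reasoning(effort="none")
    return None
```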
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 853bc41 and b2c4c96.

📒 Files selected for processing (1)
  • livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Format code with ruff
Run ruff linter and auto-fix issues
Run mypy type checker in strict mode
Maintain line length of 100 characters maximum
Ensure Python 3.9+ compatibility
Use Google-style docstrings

Files:

  • livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py
🧬 Code graph analysis (1)
livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py (2)
livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/models.py (1)
  • _supports_reasoning_effort (294-301)
livekit-agents/livekit/agents/utils/misc.py (1)
  • is_given (25-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: livekit-plugins-inworld
  • GitHub Check: livekit-plugins-cartesia
  • GitHub Check: unit-tests
  • GitHub Check: livekit-plugins-groq
  • GitHub Check: livekit-plugins-deepgram
  • GitHub Check: livekit-plugins-elevenlabs
  • GitHub Check: livekit-plugins-openai
  • GitHub Check: type-check (3.13)
  • GitHub Check: type-check (3.9)
🔇 Additional comments (5)
livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py (5)

23-23: Imports for Reasoning and model capability helper look appropriate.

Also applies to: 37-37


48-48: _LLMOptions now cleanly models the new reasoning option.


63-63: Constructor surface updated consistently.


91-91: Reasoning option is propagated into _LLMOptions as expected.


135-136: Reasoning is forwarded into request kwargs cleanly.
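The NotGiven-based forwarding the comment refers to can be illustrated with a minimal stand-alone sketch. NOT_GIVEN, is_given, and build_extra_kwargs here are simplified stand-ins for the plugin's actual sentinel and helpers, not its real API.

```python
from typing import Any, Dict

class _NotGiven:
    """Minimal stand-in for the NOT_GIVEN sentinel used across the plugins."""
    def __repr__(self) -> str:
        return "NOT_GIVEN"

NOT_GIVEN = _NotGiven()

def is_given(value: Any) -> bool:
    # Stand-in for livekit.agents.utils.misc.is_given.
    return not isinstance(value, _NotGiven)

def build_extra_kwargs(reasoning: Any = NOT_GIVEN) -> Dict[str, Any]:
    """Forward reasoning into the request kwargs only when it was set,
    either by the caller or by a model-specific default."""
    extra: Dict[str, Any] = {}
    if is_given(reasoning):
        extra["reasoning"] = reasoning
    return extra
```

This pattern keeps unset options out of the request payload entirely, rather than sending an explicit null.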


@davidzhao (Member) left a comment

lgtm

@tinalenguyen tinalenguyen merged commit 80f2e33 into main Jan 18, 2026
20 checks passed
@tinalenguyen tinalenguyen deleted the tina/add-reasoning-param-openai-responses branch January 18, 2026 01:49