
fix: sync base_url and api_base for litellm multi-provider routing#5140

Open
devin-ai-integration[bot] wants to merge 2 commits into main from devin/1774614062-fix-litellm-base-url-routing

Conversation


devin-ai-integration bot commented Mar 27, 2026

Summary

Fixes #5139. When LLM(base_url=...) is used without explicitly setting api_base, litellm never receives the custom endpoint because it reads api_base, not base_url. This causes requests to fall back to api.openai.com, breaking multi-provider setups (e.g. Scaleway + Nebius with different API keys and endpoints).

The fix adds a 4-line sync in LLM.__init__ so that whichever parameter the caller provides, both fields are populated:

  • Only base_url provided → api_base is set to match
  • Only api_base provided → base_url is set to match
  • Both provided → both keep their explicit values
  • Neither provided → both stay None

This ensures _prepare_completion_params always includes api_base in the kwargs passed to litellm.completion().
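In code, the sync could look like this minimal sketch. This is a simplified stand-in for the real `LLM.__init__`, which accepts many more parameters; the exact lines in the PR may differ.

```python
# Simplified sketch of the base_url/api_base sync; the real crewai
# LLM.__init__ accepts many more parameters. Only the endpoint
# handling described above is shown here.
class LLM:
    def __init__(self, model, base_url=None, api_base=None, **kwargs):
        self.model = model
        # litellm reads api_base, so mirror whichever alias the caller set.
        if base_url is not None and api_base is None:
            api_base = base_url
        elif api_base is not None and base_url is None:
            base_url = api_base
        self.base_url = base_url
        self.api_base = api_base
```

With this in place, constructing `LLM(model=..., base_url=...)` yields an instance whose `api_base` matches `base_url`, so the custom endpoint reaches litellm.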

Review & Testing Checklist for Human

  • Verify all internal litellm call sites go through _prepare_completion_params: This fix works because _prepare_completion_params passes api_base/api_key to litellm.completion(). If any code path (e.g. agent reasoning loops, tool selection) calls litellm without going through _prepare_completion_params, those calls would still lack the custom endpoint. Grep for direct litellm.completion / litellm.acompletion calls outside this path.
  • End-to-end test with a real multi-provider crew: Create two LLM instances with different base_url/api_key values, assign them to different agents in the same crew, and run a task. Confirm each agent's requests go to the correct endpoint (e.g. via debug logging or network inspection).
  • Check native provider paths are unaffected: The sync only runs in LLM.__init__ (the litellm fallback path). Native providers (OpenAI, Anthropic, Gemini, etc.) use their own __init__ via BaseLLM. Verify no regressions for LLM(model="gpt-4o") or LLM(model="anthropic/claude-3-sonnet").
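The multi-provider independence property from the checklist can be sketched without external dependencies. `FakeLLM`, the endpoints, and the keys below are illustrative stand-ins for the real crewai `LLM` and providers, not the actual implementation.

```python
# Illustrative stand-in for crewai's LLM; endpoints and keys are fake.
# Shows the property under test: each instance forwards its own endpoint.
class FakeLLM:
    def __init__(self, model, base_url=None, api_key=None):
        self.model = model
        self.base_url = base_url
        self.api_base = base_url  # the sync introduced by this PR
        self.api_key = api_key

    def _prepare_completion_params(self, messages):
        # Mirrors the real method's contract: api_base/api_key are always
        # included in the kwargs handed to litellm.completion().
        return {
            "model": self.model,
            "messages": messages,
            "api_base": self.api_base,
            "api_key": self.api_key,
        }

scaleway = FakeLLM("openai/llama-3.1-70b",
                   base_url="https://api.scaleway.ai/v1",
                   api_key="sk-scw-fake")
nebius = FakeLLM("openai/llama-3.1-70b",
                 base_url="https://api.studio.nebius.ai/v1",
                 api_key="sk-neb-fake")

msgs = [{"role": "user", "content": "hi"}]
p1 = scaleway._prepare_completion_params(msgs)
p2 = nebius._prepare_completion_params(msgs)
```

In the real end-to-end test, the two instances would be assigned to different agents in one crew and the outgoing requests inspected instead of the returned kwargs.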

Notes

  • The issue also suggests a monkey-patch approach for litellm.completion/litellm.acompletion. This PR intentionally does not implement that — the simpler param-syncing fix should be sufficient since _prepare_completion_params already passes api_base/api_key to every litellm call.
  • 9 new unit tests added covering all sync scenarios including _prepare_completion_params, multi-provider independence, litellm.completion mock verification, and copy/deepcopy preservation.
  • The tests (3.13) CI job was cancelled due to infrastructure issues, not a test failure; all other required checks (lint, type-checker, tests on 3.10/3.11/3.12) passed.
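The copy/deepcopy preservation scenario mentioned above can be illustrated with a minimal stand-in; `StubLLM` is hypothetical, and the actual tests exercise crewai's real `LLM` class.

```python
import copy

# Hypothetical minimal stand-in for the synced LLM, for illustration only.
class StubLLM:
    def __init__(self, base_url=None, api_base=None):
        if base_url is not None and api_base is None:
            api_base = base_url
        elif api_base is not None and base_url is None:
            base_url = api_base
        self.base_url = base_url
        self.api_base = api_base

llm = StubLLM(base_url="https://api.example.com/v1")
clone = copy.deepcopy(llm)
# The clone must keep both synced endpoint fields intact.
```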

Link to Devin session: https://app.devin.ai/sessions/3ccb7f0ac5ba4e0c95dc59b9556576a3

When LLM(base_url=...) is used without api_base, litellm does not
receive the custom endpoint because it reads api_base (not base_url).
This causes requests to fall back to api.openai.com, breaking
multi-provider setups (e.g. Scaleway + Nebius).

The fix syncs base_url and api_base in LLM.__init__:
- If only base_url is provided, api_base is set to match
- If only api_base is provided, base_url is set to match
- If both are provided, both keep their explicit values

Closes #5139

Co-Authored-By: João <joao@crewai.com>


🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring


Successfully merging this pull request may close these issues.

[BUG] fix: CrewAI 1.12.x LLM routing - litellm does not receive api_base/api_key for multi-provider setups
