Summary
Add additional provider backends that conform to the existing BaseProvider interface so users can choose their LLM vendor without changing agent logic. Implementations should mirror the capabilities of OpenAIProvider while remaining vendor-agnostic at the agent level.
Motivation
- Increase flexibility and reduce vendor lock-in.
- Enable users to leverage their preferred LLMs and enterprise contracts.
- Standardize provider behavior behind BaseProvider for consistent agent UX.
Scope
- Ensure the providers implement the three async methods defined in BaseProvider:
generate_response(messages, system_prompt=None, triggered_by_user_message=False, **kwargs) -> str
should_respond(messages, elapsed_time, context, **kwargs) -> bool
calculate_sleep_time(wake_up_pattern, min_sleep_time, max_sleep_time, context, **kwargs) -> tuple[int, str]
- Wire each provider for easy import and usage in the agent.
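The three methods above can be sketched as an abstract base class. This is an illustrative reconstruction from the signatures listed in Scope, not the actual contents of proactiveagent/providers/base.py:

```python
import abc
from typing import Any, Dict, List, Optional, Tuple


class BaseProvider(abc.ABC):
    """Sketch of the provider contract (assumed to be an ABC)."""

    def __init__(self, model: str, **kwargs: Any) -> None:
        # Store the model name and any provider-specific configuration.
        self.model = model
        self.config = kwargs

    @abc.abstractmethod
    async def generate_response(
        self,
        messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
        triggered_by_user_message: bool = False,
        **kwargs: Any,
    ) -> str: ...

    @abc.abstractmethod
    async def should_respond(
        self,
        messages: List[Dict[str, str]],
        elapsed_time: int,
        context: Dict[str, Any],
        **kwargs: Any,
    ) -> bool: ...

    @abc.abstractmethod
    async def calculate_sleep_time(
        self,
        wake_up_pattern: str,
        min_sleep_time: int,
        max_sleep_time: int,
        context: Dict[str, Any],
        **kwargs: Any,
    ) -> Tuple[int, str]: ...
```

New providers subclass this and override all three methods; the abstract markers keep a partially implemented provider from being instantiated silently.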
Non-Goals
- Changes to decision engines or the agent’s scheduling logic.
- Adding tests (can be tracked separately if needed).
Current Architecture (for reference)
- Interface: proactiveagent/providers/base.py (BaseProvider)
- Example implementation: proactiveagent/providers/openai_provider.py
- Provider usage: proactiveagent/agent.py (accepts a BaseProvider instance)
Design and Implementation Details
- Create one file per provider in proactiveagent/providers/ (e.g., anthropic_provider.py).
- Each class should:
- Accept model: str and provider-specific **kwargs in __init__ and store configuration.
- Use the vendor’s official SDKs/clients if available; otherwise, a minimal HTTP client.
- Respect the same message schema used in OpenAIProvider (list of dicts with role and content).
- Keep behavior consistent with OpenAIProvider for system prompts and triggered_by_user_message.
- Implement vendor-appropriate logic for should_respond and calculate_sleep_time while returning the same types and honoring the min/max constraints for sleep time.
- Update proactiveagent/providers/__init__.py to export new providers via __all__.
- Document environment variables and config keys required by each provider (e.g., API keys, endpoints, regions).
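For the min/max constraint on calculate_sleep_time, a small clamping helper keeps provider-specific logic honest. This is a sketch; the helper name is hypothetical:

```python
def clamp_sleep_time(proposed: int, min_sleep_time: int, max_sleep_time: int) -> int:
    """Clamp a provider-proposed sleep interval to the configured bounds."""
    return max(min_sleep_time, min(proposed, max_sleep_time))
```

Each provider can run whatever vendor-specific logic it likes to propose an interval, then pass the result through the clamp before returning it, so the agent never sleeps outside the configured window.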
Minimal Provider Skeleton
from typing import List, Dict, Any, Optional

from .base import BaseProvider


class AnthropicProvider(BaseProvider):
    def __init__(self, model: str, **kwargs):
        super().__init__(model, **kwargs)
        # init vendor client here

    async def generate_response(
        self,
        messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
        triggered_by_user_message: bool = False,
        **kwargs,
    ) -> str:
        # call vendor API and return text
        return "..."

    async def should_respond(
        self,
        messages: List[Dict[str, str]],
        elapsed_time: int,
        context: Dict[str, Any],
        **kwargs,
    ) -> bool:
        # vendor-backed decision or lightweight heuristic
        return True

    async def calculate_sleep_time(
        self,
        wake_up_pattern: str,
        min_sleep_time: int,
        max_sleep_time: int,
        context: Dict[str, Any],
        **kwargs,
    ) -> tuple[int, str]:
        # compute int within [min_sleep_time, max_sleep_time], plus reasoning
        return min_sleep_time, "reason"
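As a concrete example of the "lightweight heuristic" mentioned in the should_respond stub, a vendor-free fallback might look like the following. The function name and the quiet_threshold knob are hypothetical:

```python
from typing import Any, Dict, List


def heuristic_should_respond(
    messages: List[Dict[str, str]],
    elapsed_time: int,
    context: Dict[str, Any],
    quiet_threshold: int = 300,
) -> bool:
    """Fallback heuristic: respond only when the last message came from the
    user and at least quiet_threshold seconds of silence have elapsed."""
    if not messages:
        return False
    last_is_user = messages[-1].get("role") == "user"
    return last_is_user and elapsed_time >= quiet_threshold
```

A provider could call a heuristic like this when the vendor API is unavailable or when a cheap local decision is preferable to an extra model call.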
Developer Experience
- Provide simple usage examples in examples/ showing how to instantiate ProactiveAgent with each new provider (similar to existing examples).
- Document provider selection and required env vars in README.md and proactiveagent/providers/README.md.
Additional Context
- Reference OpenAIProvider for structure and behavior parity.
- Ensure async boundaries are respected to avoid blocking the agent loop.
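On the async-boundary point: vendor SDKs without native async support should be offloaded to a thread so they do not block the agent loop. A minimal sketch using asyncio.to_thread, where the SDK call is a stand-in:

```python
import asyncio
import time


def _blocking_sdk_call(prompt: str) -> str:
    # Stand-in for a synchronous vendor SDK call (e.g., a blocking HTTP request).
    time.sleep(0.01)
    return f"echo: {prompt}"


async def generate_response_nonblocking(prompt: str) -> str:
    # Offload the blocking call to a worker thread; the event loop stays free
    # to keep scheduling the agent's other coroutines.
    return await asyncio.to_thread(_blocking_sdk_call, prompt)


result = asyncio.run(generate_response_nonblocking("hi"))
```

Vendors that ship async clients can be awaited directly; the to_thread wrapper is only needed for synchronous SDKs or hand-rolled HTTP calls.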