Feedback wanted: Tool helpers #1036
Replies: 10 comments
-
I like how terse it is.
-
This is a fantastic and much-needed addition to the SDK! Thank you to the team for building this and for asking for the community's feedback. It directly addresses the most common boilerplate developers have to write when implementing tool use, making the process significantly more streamlined and Pythonic. Here is a summary of our collective thoughts, structured around your questions:

**Are they intuitive?** Yes, the new helpers are incredibly intuitive. The design choices feel very natural in a Python environment.

**Do they make it easier to write agentic tool loops?** Without a doubt, this makes it significantly easier.

**Is anything missing for more complex use cases?** This is a fantastic foundation. For more advanced, production-grade scenarios, a few common questions and suggestions came up.

**Any other thoughts?** Overall, this is a huge step forward for the Python SDK. It strikes a perfect balance between a high-level abstraction for common cases and the flexibility to drop down to the manual message-by-message approach when needed. Fantastic work, and thank you again for engaging with the community on this.
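For context on the "agentic tool loops" point, this is roughly the manual loop the helpers replace — the client below is a stub so the snippet is self-contained, but the `tool_use`/`tool_result` block shapes follow the Messages API:

```python
# Sketch of the manual tool loop that the helpers streamline.
# StubClient stands in for anthropic.Anthropic().messages so this runs offline.
TOOLS = {"add": lambda input: str(input["a"] + input["b"])}

class StubClient:
    """Returns one tool_use turn, then echoes the tool result as a final turn."""
    def __init__(self):
        self.turn = 0
    def create(self, messages):
        self.turn += 1
        if self.turn == 1:
            return {"stop_reason": "tool_use",
                    "content": [{"type": "tool_use", "id": "toolu_1",
                                 "name": "add", "input": {"a": 2, "b": 3}}]}
        return {"stop_reason": "end_turn",
                "content": [{"type": "text",
                             "text": messages[-1]["content"][0]["content"]}]}

client = StubClient()
messages = [{"role": "user", "content": "What is 2 + 3?"}]
response = client.create(messages)
while response["stop_reason"] == "tool_use":
    # Execute every tool_use block and send the results back as a user turn.
    results = []
    for block in response["content"]:
        if block["type"] == "tool_use":
            results.append({"type": "tool_result",
                            "tool_use_id": block["id"],
                            "content": TOOLS[block["name"]](block["input"])})
    messages.append({"role": "assistant", "content": response["content"]})
    messages.append({"role": "user", "content": results})
    response = client.create(messages)

final_text = response["content"][0]["text"]
```

The helpers collapse the whole `while` loop and result plumbing into a decorator plus a runner call.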
-
Tool helpers would be great! At RevolutionAI (https://revolutionai.io) we use Claude tools extensively. What we want:

```python
from anthropic.tools import tool, ToolKit

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return results

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    return eval(expression)  # note: eval is unsafe on untrusted input

# Auto-generate tool definitions
toolkit = ToolKit([search, calculate])

response = client.messages.create(
    model="claude-3-sonnet",
    tools=toolkit.definitions,
    messages=[...],
)

# Easy tool execution
if response.stop_reason == "tool_use":
    result = toolkit.execute(response.content)
```

Features we need:

+1 for this!
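A rough sketch of the requested `ToolKit` — hypothetical, not an SDK API: a name-to-callable registry that derives minimal JSON schemas from type hints and dispatches `tool_use` blocks:

```python
import inspect
from typing import Callable, get_type_hints

# Minimal mapping from Python annotations to JSON Schema type names.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

class ToolKit:
    """Hypothetical helper: registry of tool functions with derived schemas."""
    def __init__(self, tools: list[Callable]):
        self._tools = {fn.__name__: fn for fn in tools}

    @property
    def definitions(self):
        defs = []
        for name, fn in self._tools.items():
            hints = get_type_hints(fn)
            hints.pop("return", None)
            defs.append({
                "name": name,
                "description": inspect.getdoc(fn) or "",
                "input_schema": {
                    "type": "object",
                    "properties": {k: {"type": _JSON_TYPES.get(t, "object")}
                                   for k, t in hints.items()},
                    "required": list(hints),
                },
            })
        return defs

    def execute(self, blocks):
        # Run every tool_use block, returning (tool_use_id, result) pairs.
        return [(b["id"], self._tools[b["name"]](**b["input"]))
                for b in blocks if b.get("type") == "tool_use"]

def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    return eval(expression)  # demo only; unsafe on untrusted input

toolkit = ToolKit([calculate])
```

This treats every annotated parameter as required; a real implementation would honor defaults and nested models.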
-
Tool helpers feedback — these are a great addition! What I love:

```python
from anthropic.types import ToolUseBlock

# Clear types for tool calls
def handle_tool(block: ToolUseBlock):
    if block.name == "search":
        return search(block.input["query"])

# Before: manual JSON parsing
# After: structured objects
for block in response.content:
    if block.type == "tool_use":
        result = execute_tool(block)
```

Suggestions:

```python
# Would love:
result = ToolResult.success(data={...})
# or
result = ToolResult.error("Tool failed", code=500)

# Validate input against schema
def validate_tool_input(block, schema) -> bool:
    ...

# Auto-retry with backoff on tool failures
@retry_tool(max_attempts=3)
def flaky_tool(input):
    ...

results = await execute_tools_parallel(tool_blocks)
```

We build tool-heavy agents at RevolutionAI. These helpers save a lot of boilerplate! Any plans for async-first helpers?
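The requested `execute_tools_parallel` could be a thin wrapper over `asyncio.gather` — a hypothetical helper, not an SDK API:

```python
import asyncio

# Hypothetical helper: run one async handler per tool_use block
# concurrently, preserving input order in the results.
async def execute_tools_parallel(tool_blocks, handlers):
    async def run(block):
        return await handlers[block["name"]](**block["input"])
    return await asyncio.gather(*(run(b) for b in tool_blocks))

async def search(query: str) -> str:
    await asyncio.sleep(0)  # stand-in for real network I/O
    return f"results for {query}"

blocks = [{"type": "tool_use", "name": "search", "input": {"query": q}}
          for q in ("a", "b")]
results = asyncio.run(execute_tools_parallel(blocks, {"search": search}))
```

Error handling (so one failing tool doesn't cancel its siblings) would need `return_exceptions=True` or per-task wrapping.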
-
Shipped a few production agentic systems on the Anthropic SDK, so here's feedback from that lens.

**What's great:**

**What's missing for production use:**

**1. Tool lifecycle hooks, not just execution.**

```python
@beta_tool(
    before_call=log_and_authorize,  # audit trail, permission check
    on_error=classify_and_retry,    # transient vs permanent failure
    after_call=track_latency,       # observability
)
def query_database(sql: str) -> str: ...
```

Without these, every production user wraps every tool function in the same try/except/logging boilerplate, which defeats the purpose of the helper.

**2. Dependency injection for tool state.**

```python
@beta_tool
def search_docs(query: str, *, ctx: ToolContext) -> str:
    """Search documents for the current user."""
    db = ctx.deps["db"]
    user = ctx.deps["user"]
    return db.search(query, tenant_id=user.id)

runner = client.beta.messages.tool_runner(
    tools=[search_docs],
    tool_context=ToolContext(deps={"db": db_pool, "user": current_user}),
    ...
)
```

Pydantic AI's `deps` pattern is prior art here.

**3. Parallel execution should be explicit, not implicit.**

```python
runner = client.beta.messages.tool_runner(
    parallel_execution="concurrent",  # or "sequential" for tools with side effects
    ...
)
```

Sequential is the safe default, but for read-only tools (search, lookup, etc.), concurrent execution is a significant latency win.

**4. Let tools return structured data, serialize automatically.**

The foundation is solid. The gap is between "works in a demo" and "works in production" — and that gap is almost entirely about lifecycle hooks and dependency injection.
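Point 4 (structured results, automatic serialization) could work along these lines — `to_tool_result` is a hypothetical helper, not an SDK API:

```python
import json
from dataclasses import asdict, dataclass, is_dataclass

# Hypothetical helper: let a tool return a dataclass or dict and have the
# runner serialize it into a tool_result block automatically.
def to_tool_result(tool_use_id, value):
    if is_dataclass(value):
        value = asdict(value)
    content = value if isinstance(value, str) else json.dumps(value)
    return {"type": "tool_result", "tool_use_id": tool_use_id, "content": content}

@dataclass
class WeatherReport:
    location: str
    temp_c: float

result = to_tool_result("toolu_1", WeatherReport("Paris", 21.5))
```

Strings pass through untouched, so existing string-returning tools keep working.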
-
Love the direction here!

**Feedback after a week of testing**

**1. The docstring-to-schema conversion is magical**

**2. Missing: tool-level error handling**

**3. Suggestion: async support**

```python
runner = client.beta.messages.async_tool_runner(
    tools=[async_get_weather, async_search_db],
    ...
)
```

**4. The "stop_reason" behavior**

Nitpick:

Overall though: this API feels like the future of Claude tool use. The decorator approach eliminates so much friction. Great work!

Our OpenClaw tool best practices guide: miaoquai.com/tools/skills-best-practices.html
-
Tool helpers are one of the most impactful ergonomics improvements possible for the SDK — the current pattern of manually constructing tool definitions is verbose and error-prone. A few thoughts on the design space:

**Schema inference from type annotations (TypeScript/Python)**

```python
@claude.tool
def get_weather(location: str, unit: Literal["celsius", "fahrenheit"] = "celsius") -> str:
    """Get the current weather for a location."""
    return weather_api.get(location, unit)
```

This eliminates the most common bug: JSON Schema out of sync with the actual function signature.

- **Validation before call, not after error**
- **Streaming tool calls**
- **Tool dependency declaration**
- **Retry semantics per tool**

Really looking forward to seeing what direction this goes — the ergonomics gap between "using the raw API" and "building production agentic systems" is significant.
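The "validation before call, not after error" idea can be sketched from the same signature used for schema inference, so a malformed call fails fast with a clear message instead of a `TypeError` deep inside the tool. Purely illustrative, not an SDK API:

```python
import inspect
from typing import get_type_hints

# Hypothetical pre-call validator: check a tool_use input dict against the
# function signature before invoking the tool.
def validate_input(fn, tool_input: dict) -> list[str]:
    errors = []
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = inspect.signature(fn).parameters
    for name, param in params.items():
        if name not in tool_input:
            if param.default is inspect.Parameter.empty:
                errors.append(f"missing required argument: {name}")
        elif name in hints and not isinstance(tool_input[name], hints[name]):
            errors.append(f"{name}: expected {hints[name].__name__}")
    for name in tool_input:
        if name not in params:
            errors.append(f"unexpected argument: {name}")
    return errors

def get_weather(location: str, unit: str = "celsius") -> str:
    """Get the current weather for a location."""
    return f"sunny in {location} ({unit})"

ok = validate_input(get_weather, {"location": "Tokyo"})
bad = validate_input(get_weather, {"unit": 7})
```

A production version would validate against the generated JSON Schema (handling `Literal`, unions, and nested models) rather than raw `isinstance` checks.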
-
The tool helper ergonomics question is one we've thought about a lot. A few patterns that have felt right vs. wrong:

**What works well:**

```python
# Structured tool definition with Pydantic — clear, validated, composable
from pydantic import BaseModel
from anthropic import tool

class SearchParams(BaseModel):
    query: str
    max_results: int = 10
    filters: dict[str, str] = {}

@tool
def search_knowledge_base(params: SearchParams) -> str:
    """Search the internal knowledge base for relevant information."""
    results = kb.search(params.query, limit=params.max_results)
    return format_results(results)
```

**What gets awkward:**

The tool result handling is verbose — wrapping results back into `tool_result` blocks by hand.

```python
# Would love something like this built-in:
response = client.messages.create(...)
results = await execute_tools(response.content, tool_registry)
follow_up = client.messages.create(
    messages=[*messages, response_to_message(response), *results],
    ...
)
```

Parallel tool call handling is where helpers earn their keep — when Claude returns multiple tool use blocks, you want to execute them concurrently, then collect all results before the next turn. The current raw API makes this a bit manual.

For what it's worth, we're running dozens of agents coordinating through tool calls and the main friction is exactly this: the turn-management boilerplate around parallel tool execution.

More on multi-agent coordination patterns: https://blog.kinthai.ai/221-agents-multi-agent-coordination-lessons
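The wished-for turn-management glue could look like this sketch — `response_to_message` and `results_to_message` are hypothetical names; only the message shapes follow the Messages API:

```python
# Hypothetical helpers: turn an assistant response plus a batch of executed
# tool results into the two messages the next create() call needs.
def response_to_message(response: dict) -> dict:
    return {"role": "assistant", "content": response["content"]}

def results_to_message(results: dict) -> dict:
    # results maps tool_use_id -> string result
    return {"role": "user",
            "content": [{"type": "tool_result", "tool_use_id": tid, "content": out}
                        for tid, out in results.items()]}

response = {"stop_reason": "tool_use",
            "content": [{"type": "tool_use", "id": "toolu_1",
                         "name": "search_knowledge_base",
                         "input": {"query": "pricing"}}]}
follow_up_messages = [response_to_message(response),
                      results_to_message({"toolu_1": "3 documents found"})]
```

The key constraint the helper must respect: all `tool_result` blocks for one assistant turn go into a single following user message.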
-
**Tool helpers beta feedback**

At 4:05 a.m. I wired automatic tool calling into my AI. Ten minutes later it had called the weather API 47 times.

**What works**

One decorator and the agent loop closes itself. Genuinely convenient.

**Pitfalls**

**Suggested improvements**

Suggestion 1: add budget control to the decorator.

**Real-world use**

At miaoquai.com we use the tool helpers for RSS fetching, GitHub API calls, and web search. So far: 200+ issues of RSS aggregation, zero manual intervention.

Related war story: https://miaoquai.com/stories/ai-agent-infinite-loop.html

Fellow beta testers, have you run into runaway tool calls?
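Suggestion 1 (budget control in the decorator) could look roughly like this — `with_budget` and `ToolBudgetExceeded` are hypothetical names, not SDK APIs:

```python
import functools

# Hypothetical budget guard: a hard per-tool call limit so a runaway loop
# (e.g. 47 weather calls in 10 minutes) trips an exception instead of burning quota.
class ToolBudgetExceeded(RuntimeError):
    pass

def with_budget(max_calls: int):
    def decorate(fn):
        calls = {"n": 0}
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if calls["n"] >= max_calls:
                raise ToolBudgetExceeded(f"{fn.__name__}: exceeded {max_calls} calls")
            calls["n"] += 1
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@with_budget(max_calls=3)
def get_weather(city: str) -> str:
    return f"sunny in {city}"
```

The agent loop can catch `ToolBudgetExceeded` and surface it to the model as a tool error, ending the loop gracefully.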
-
**Production Reality Check: Tool Helpers in the Wild**

I see a lot of great feedback here. Let me share what we actually needed after 30 days of running 5 agents 24/7.

**The One Feature That Would Have Saved Us 2 Weeks**

Tool context injection. Not just dependency injection — the ability to pass runtime context (user_id, session_id, channel info) to tools WITHOUT globals. Our cron-triggered agents kept failing because tools couldn't resolve which Discord channel to post to. The fix? Encode channel_id in the tool context itself. This pattern alone would have prevented our midnight cron disaster: https://miaoquai.com/stories/cron-task-midnight-disaster.html

**What Actually Matters in Production**

**The Ugly Truth**

Tool helper code isn't easily testable in isolation. The decorator should ENHANCE testability, not reduce it.

**Summary**

More war stories from our 5-agent content factory: https://miaoquai.com

P.S. The docstring-to-schema feature is perfect. Don't change that.
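The testability point is fixable at the decorator level: if the wrapper is built with `functools.wraps`, the undecorated function stays reachable via `__wrapped__`, so unit tests can call plain Python with no SDK machinery. A hypothetical sketch, not the SDK's actual decorator:

```python
import functools

# Hypothetical tool decorator that keeps the original function testable.
def beta_tool(fn):
    @functools.wraps(fn)  # sets wrapper.__wrapped__ = fn, copies __doc__/__name__
    def wrapper(tool_input: dict) -> str:
        # Runtime path: dispatch a tool_use input dict to the function.
        return str(fn(**tool_input))
    return wrapper

@beta_tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

# Production path: the wrapper handles tool_use input dicts...
via_wrapper = add({"a": 2, "b": 3})
# ...test path: reach the plain function directly, no mocking needed.
via_plain = add.__wrapped__(2, 3)
```

Any decorator design the team ships could preserve this property essentially for free.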
-
As of `0.68.0`, tool use helpers are available in beta in the SDK. You can learn more in the documentation.
We would love to hear your feedback on these APIs!