For all its power, general AI hits a wall of context. It's a wall built from the nuanced workflows, domain-specific data, and hard-won intuition that define your world. From scientific research and financial analysis to complex engineering, generic models can't climb this wall. They can't speak your language.
The AWorld Thesis is that the true scaling of AI is achieved by enabling experts like you to build a gate in that wall.
AWorld, with its CLI mode, is the platform designed for this. It provides the fundamental recipe for you, the expert, to infuse your knowledge and unique insights into fleets of autonomous agents. This is how we move beyond generic promise to specific, robust applications that navigate your world with precision.
The journey from an idea to an evolved, autonomous agent begins at your fingertips.
Create a .env file in the AWorld/aworld-cli directory to configure the base model for both the AWorld Agent and any agents it creates. Add the following content:
LLM_MODEL_NAME="your_model_name, Claude-Sonnet-4 or above suggested"
LLM_PROVIDER="openai"
LLM_API_KEY="your_model_api_key"
LLM_BASE_URL="your_base_url"Install and Enter AWorld-CLI
git clone https://github.com/inclusionAI/AWorld && cd AWorld
conda create -n aworld_env python=3.11 -y && conda activate aworld_env
pip install -e . && cd aworld-cli && pip install -e .
aworld-cli

Instantly scaffold an agent from a natural language description of your task, such as "create an agent that can generate an HTML report". AWorld-CLI handles the boilerplate, so you can focus on the logic.
Let the AWorld Agent make an agent for you

This command generates a fully operational agent file, grounded in our carefully curated Verified Skills and a global configuration, ready for immediate execution.
Once it's generated, your agent is a permanent, reusable tool in your ~/.agents folder.
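To make this concrete, here is a minimal sketch of the shape such a generated file might take. The import paths, class names, and parameters below are assumptions for illustration only; the code AWorld-CLI actually generates (and the Verified Skills it wires in) will differ.

```python
# Hypothetical sketch of a generated agent file.
# Import paths, class names, and parameters are assumptions, not actual AWorld-CLI output.
import os

from aworld.agents.llm_agent import Agent   # assumed import path
from aworld.config.conf import AgentConfig  # assumed import path
from aworld.runner import Runners           # assumed import path

# The global configuration created during setup supplies the model credentials.
conf = AgentConfig(
    llm_model_name=os.environ["LLM_MODEL_NAME"],
    llm_provider=os.environ["LLM_PROVIDER"],
    llm_api_key=os.environ["LLM_API_KEY"],
    llm_base_url=os.environ["LLM_BASE_URL"],
)

# The agent's behavior lives in its system prompt (plus any skills or tools it references).
html_report_agent = Agent(
    conf=conf,
    name="html_report_agent",
    system_prompt="You generate clean, self-contained HTML reports from the user's request.",
)

if __name__ == "__main__":
    # Run the agent once on a sample task and print the result.
    result = Runners.sync_run(
        input="Generate an HTML report introducing Beckham.",
        agent=html_report_agent,
    )
    print(result)
```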
When you automate the creation of a new agent, AWorld-CLI doesn't start from scratch. It intelligently references these battle-tested Skills for robustness and simultaneously learns from your custom skills in the ~/agents folder. This dual inheritance ensures every agent is not only reliable from the start but also adapted to your requirements.
| Skills | Description |
|---|---|
| 🚀 PPT Agent | Creates polished presentations from documents, outlines, or data. |
| 🧠 DeepSearch Agent | Conducts comprehensive, multi-source research on a topic and synthesizes a structured report. |
Prompt the AWorld Agent to execute your newly created agent on a task, such as "Let the html agent generate an html report introducing Beckham", and watch it work. Every call, action, and observation is captured in a detailed trajectory log, saved right to your local directory.
Let the created agent do the job for you

If the agent's performance isn't yet where you want it, you have a spectrum of powerful options for refinement.
Manual Evolution
You are the expert. Open the generated Python file and fine-tune the prompts, logic, or tool usage directly. You have full control.
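For example, a manual tweak can be as small as sharpening the generated system prompt with your own conventions (the variable below follows the hypothetical sketch above; your generated file will name things differently):

```python
# Before: the generic instruction produced at scaffold time.
system_prompt = "You generate clean, self-contained HTML reports from the user's request."

# After: your domain expertise baked directly into the agent.
system_prompt = (
    "You generate clean, self-contained HTML reports from the user's request. "
    "Always include a table of contents, cite every data source inline, "
    "and inline all CSS so the report renders without external assets."
)
```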
Exciting: AI-Assisted Evolution
This is where AWorld truly shines! Prompt AWorld with your expertise and desired changes, such as "help me optimize the html agent so that it can browse the web, download images, and insert them into the html". It then tasks our built-in Optimizer Agent, a specialized code agent, to act as your AI pair programmer. Because all agents you create extend from a unified AWorld base class, the Optimizer Agent has a global understanding of the agent's structure. This allows it to reason about and precisely modify the agent's code to implement your logic, evolving its capabilities far beyond simple prompt tuning.
Let AI evolve your agent to make it more professional

Our Vista: Self-Evolution
This is the future. Instead of you providing explicit prompts, the system automatically detects sub-optimal performance based on a reward signal (e.g., failed validation, deviation from a verified Skill). It then triggers an autonomous optimization loop, evolving the agent on its own. This is evaluation-driven evolution, where the agent gains true self-awareness and improves without constant human intervention.
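Conceptually, the loop looks like the sketch below. This is not AWorld's implementation: every helper here is a hypothetical placeholder for the rollout, reward signal, and Optimizer Agent you would wire in.

```python
# Conceptual sketch of evaluation-driven evolution; everything here is a hypothetical placeholder.
REWARD_THRESHOLD = 0.8  # assumed acceptance bar for the reward signal

def run_agent(agent, task):
    """Placeholder rollout: execute the agent on one task and return its trajectory."""
    ...

def evaluate(trajectory):
    """Placeholder reward signal, e.g. low for failed validation or deviation from a verified Skill."""
    ...

def optimize(agent, failures):
    """Placeholder for the Optimizer Agent rewriting prompts, logic, or tool usage."""
    ...

def evolve(agent, tasks, max_rounds=3):
    for _ in range(max_rounds):
        # Roll the agent out on the tasks and score each trajectory.
        trajectories = [run_agent(agent, task) for task in tasks]
        rewards = [evaluate(traj) for traj in trajectories]

        if sum(rewards) / len(rewards) >= REWARD_THRESHOLD:
            break  # performance is acceptable; stop evolving

        # Otherwise, hand the low-reward trajectories to the optimizer for another round.
        failures = [t for t, r in zip(trajectories, rewards) if r < REWARD_THRESHOLD]
        agent = optimize(agent, failures)
    return agent
```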
Once you're satisfied with your optimized agent, it is permanent and reusable in your ~/agents folder.
In AWorld, an agent is a model enhanced with tools. But real-world problems often demand more than a single agent. To solve this, AWorld gives you full control with flexible build paths, letting you hand-craft complex, collaborative multi-agent systems (a minimal sketch follows the feature list below).
- Flexible Multi-Agent Orchestration, Rich Environment Sandbox, Comprehensive Observability Tracing (Docs)
- Parallel Tasks Runtime, Streaming Response (Docs)
- Human in the Loop (HITL) (Docs)
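For a rough sense of what hand-crafting such a team looks like, here is a minimal sketch of two cooperating agents. The `Swarm` class, import paths, and parameters are assumptions; consult the AWorld docs for the actual multi-agent API.

```python
# Hypothetical sketch of a two-agent team; class names and import paths are assumptions.
from aworld.agents.llm_agent import Agent   # assumed import path
from aworld.core.agent.swarm import Swarm   # assumed import path
from aworld.runner import Runners           # assumed import path

# Model credentials are assumed to be picked up from the .env configuration above.
researcher = Agent(
    name="researcher",
    system_prompt="Gather and summarize reliable sources on the given topic.",
)
writer = Agent(
    name="writer",
    system_prompt="Turn the research summary into a polished, self-contained HTML report.",
)

# Chain the agents so the researcher's output feeds the writer.
team = Swarm(researcher, writer)
result = Runners.sync_run(
    input="Produce an HTML report on multi-agent systems.",
    swarm=team,
)
print(result)
```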
Launch our official DeepResearch team in the AWorld Playground to see AI collaboration live. Inspect its source, run it end-to-end, and get inspired.
From User to Creator: Get Your Agent Featured!
Ready to build your own? Use the aworld-cli to forge an agent with your unique expertise, captured in its skill.md file.
To get your creation featured, simply submit a Pull Request with your skill.md to: AWorld/examples/Custom_Skills/
We'll showcase the best community agents here in the Playground. Let your expertise evolve into a professional agent, gain recognition, and empower the entire community to experience the amazing tools you've built.
AWorld's mission is to handle the complexity so you can focus on innovation. This section showcases cutting-edge multi-agent systems built with AWorld, advancing toward AGI.
- FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling. arXiv, 2025. paper, code, model, dataset
  Zengzhuang Xu, Bingguang Hao, Zechuan Wang, Yuntao Wen, Maolin Wang, et al.
- From Failure to Mastery: Generating Hard Samples for Tool-use Agents. arXiv, 2026. paper, code, model, dataset
  Bingguang Hao, Zengzhuang Xu, Yuntao Wen, Xinyi Xu, Yang Liu, et al.
- AWorld: Orchestrating the Training Recipe for Agentic AI. arXiv, 2025. paper, code, model
  Chengyue Yu, Siyuan Lu, Chenyi Zhuang, Dong Wang, Qintong Wu, et al.
- FunReason: Enhancing Large Language Models' Function Calling via Self-Refinement Multiscale Loss and Automated Data Refinement. arXiv, 2025. paper, model
  Bingguang Hao, Maolin Wang, Zengzhuang Xu, Cunyin Peng, et al.
- Exploring Superior Function Calls via Reinforcement Learning. arXiv, 2025. paper, code
  Bingguang Hao, Maolin Wang, Zengzhuang Xu, Yicheng Chen, et al.
- RAG-R1: Incentivize the Search and Reasoning Capabilities of LLMs through Multi-query Parallelism. arXiv, 2025. paper, code, model
  Zhiwen Tan, Jiaming Huang, Qintong Wu, Hongxuan Zhang, Chenyi Zhuang, Jinjie Gu
- V2P: From Background Suppression to Center Peaking for Robust GUI Grounding Task. arXiv, 2025. paper, code
  Jikai Chen, Long Chen, Dong Wang, Leilei Gan, Chenyi Zhuang, Jinjie Gu
- Don't Just Fine-tune the Agent, Tune the Environment. arXiv, 2025. paper
  Siyuan Lu, Zechuan Wang, Hongxuan Zhang, Qintong Wu, Leilei Gan, Chenyi Zhuang, et al.
- Profile-Aware Maneuvering: A Dynamic Multi-Agent System for Robust GAIA Problem Solving by AWorld. arXiv, 2025. paper, code
  Zhitian Xie, Qintong Wu, Chengyue Yu, Chenyi Zhuang, Jinjie Gu
- Recon-Act: A Self-Evolving Multi-Agent Browser-Use System via Web Reconnaissance, Tool Generation, and Task Execution. arXiv, 2025. paper, code
  Kaiwen He, Zhiwei Wang, Chenyi Zhuang, Jinjie Gu
Our roadmap includes expanding our AI for Science & Business initiative, deepening our self-evolution capabilities, and growing our library of community-contributed Skills.
We warmly welcome developers, researchers, and domain experts to join us. Whether you're enhancing the framework or contributing a Skill from your field of expertise, your work is valuable.
For academic citations, or if you wish to contact us, please use the following BibTeX entry:
@misc{yu2025aworldorchestratingtrainingrecipe,
title={AWorld: Orchestrating the Training Recipe for Agentic AI},
author={Chengyue Yu and Siyuan Lu and Chenyi Zhuang and Dong Wang and Qintong Wu and Zongyue Li and Runsheng Gan and Chunfeng Wang and Siqi Hou and Gaochi Huang and Wenlong Yan and Lifeng Hong and Aohui Xue and Yanfeng Wang and Jinjie Gu and David Tsai and Tao Lin},
year={2025},
eprint={2508.20404},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.20404},
}