Building multi-agent AI systems (Multi-Agent AI Systems) is a daunting task. You have to think about distributed architecture, communication mechanisms, orchestration flows, and a pile of other complexities. With llama-agents, though, things get much simpler: it provides a powerful framework that helps us handle all of these hard problems!
The core idea of llama-agents is that every agent runs as an independently operating microservice. You are free to customize each agent's behavior and how agents interact, and you can flexibly deploy, monitor, and scale them. No more wrestling with distributed plumbing; just focus on your application logic.
The framework is still in alpha, but it already offers a lot of functionality. For example, it provides an LLM-based orchestrator that can intelligently decide which agents to call to complete a task. You can also define the interaction flow between agents manually yourself. On top of that, the system has built-in observability tooling, so you can see at a glance how each agent is doing.
llama-agents also takes care of deployment! Each agent service and the control plane can be launched and scaled independently, making the whole system easy to manage.
The official repository and announcement post:
https://github.com/run-llama/llama-agents
https://www.llamaindex.ai/blog/introducing-llama-agents-a-powerful-framework-for-building-production-multi-agent-ai-systems
Let's look at a more involved example: a query-rewriting RAG system. It first rewrites the user's query to improve retrieval, then performs RAG over the documents using the rewritten query. This example shows how to build a more sophisticated system that combines query rewriting with RAG to improve question answering.
import dotenv
dotenv.load_dotenv()  # our .env defines OPENAI_API_KEY

from llama_index.core import VectorStoreIndex, Document
from llama_index.core.agent import FnAgentWorker
from llama_index.core import PromptTemplate
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_agents import (
    AgentService,
    ControlPlaneServer,
    SimpleMessageQueue,
    PipelineOrchestrator,
    ServiceComponent,
)
from llama_agents.launchers import LocalLauncher
from llama_index.llms.openai import OpenAI
import logging

# change logging level to enable or disable more verbose logging
logging.getLogger("llama_agents").setLevel(logging.INFO)

# Load and index your document
docs = [Document(text="The rabbit is a small mammal with long ears and a fluffy tail. His name is Peter.")]
index = VectorStoreIndex.from_documents(docs)

# Define a query rewrite agent
HYDE_PROMPT_STR = (
    "Please rewrite the following query to include more detail:\n{query_str}\n"
)
HYDE_PROMPT_TMPL = PromptTemplate(HYDE_PROMPT_STR)

def run_hyde_fn(state):
    prompt_tmpl, llm, input_str = (
        state["prompt_tmpl"],
        state["llm"],
        state["__task__"].input,
    )
    qp = QueryPipeline(chain=[prompt_tmpl, llm])
    output = qp.run(query_str=input_str)
    state["__output__"] = str(output)
    return state, True

hyde_agent = FnAgentWorker(
    fn=run_hyde_fn,
    initial_state={"prompt_tmpl": HYDE_PROMPT_TMPL, "llm": OpenAI()},
).as_agent()

# Define a RAG agent
def run_rag_fn(state):
    retriever, llm, input_str = (
        state["retriever"],
        state["llm"],
        state["__task__"].input,
    )
    query_engine = RetrieverQueryEngine.from_args(retriever, llm=llm)
    response = query_engine.query(input_str)
    state["__output__"] = str(response)
    return state, True

rag_agent = FnAgentWorker(
    fn=run_rag_fn,
    initial_state={"retriever": index.as_retriever(), "llm": OpenAI()},
).as_agent()

# Set up the multi-agent system
message_queue = SimpleMessageQueue()

query_rewrite_service = AgentService(
    agent=hyde_agent,
    message_queue=message_queue,
    description="Query rewriting service",
    service_name="query_rewrite",
)
rag_service = AgentService(
    agent=rag_agent,
    message_queue=message_queue,
    description="RAG service",
    service_name="rag",
)

# Create the pipeline: rewrite the query first, then run RAG on the result
pipeline = QueryPipeline(chain=[
    ServiceComponent.from_service_definition(query_rewrite_service),
    ServiceComponent.from_service_definition(rag_service),
])
orchestrator = PipelineOrchestrator(pipeline)

control_plane = ControlPlaneServer(
    message_queue=message_queue,
    orchestrator=orchestrator,
)

# Set up the launcher
launcher = LocalLauncher(
    [query_rewrite_service, rag_service],
    control_plane,
    message_queue,
)

# Run a query
result = launcher.launch_single("Tell me about rabbits")
print(result)
llama-agents provides a powerful, flexible framework for building complex multi-agent AI systems. Whether you are prototyping a new idea or scaling up for production, llama-agents gives you the tools you need to realize your AI vision. Check out the repository to learn more, especially the examples.