CrewAI: A Multi-Agent Framework That Combines the Best of Its Peers (includes a video tutorial on using CrewAI with local models via Ollama)


In an earlier article, "Demystifying LLM Application Development (26) - Prompt (Agent frameworks such as AutoGPT and AutoGen)", I introduced multi-agent frameworks such as AutoGen and ChatDev. Recently a promising new framework, CrewAI, has emerged. Standing on the shoulders of frameworks like AutoGen and aiming squarely at production use, it combines the flexibility of AutoGen's conversational agents with the domain-oriented, process-driven strengths of ChatDev, while avoiding AutoGen's lack of framework-level process support and ChatDev's overly narrow, hard-to-generalize workflows. It supports dynamic process design across a wide range of scenarios and adapts seamlessly to both development and production workflows. In other words, with CrewAI you can define your own roles for a given scenario, as with AutoGen, while also prescribing an execution process, as with ChatDev, so that the agents can better accomplish specific, complex goals. The project has already earned 3.9K stars and reached second place on Product Hunt.


https://github.com/joaomdmoura/crewAI

CrewAI offers the following key features:

  • Role-based agent design: customize agents with specific roles, goals, and tools.
  • Autonomous inter-agent delegation: agents can delegate tasks and consult one another autonomously, improving problem-solving efficiency.
  • Flexible task management: define tasks with customizable tools and assign them to agents dynamically.
  • Process-driven execution (the biggest highlight): only sequential task execution is supported for now, but more advanced process definitions, such as consensual and hierarchical processes, are planned.
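A sequential process can be pictured as a simple pipeline in which each task's output becomes context for the next task's agent. The following is a minimal pure-Python sketch of that idea (a hypothetical illustration only, not CrewAI's actual internals):

```python
# Minimal sketch of a sequential process: run each task's agent in
# order, feeding the previous task's output in as context.
# Hypothetical illustration, not CrewAI's implementation.
def run_sequential(tasks):
    context = ""
    for agent_fn, description in tasks:
        context = agent_fn(description, context)
    return context

# Toy "agents" standing in for LLM-backed agents:
researcher = lambda desc, ctx: "trend report"
writer = lambda desc, ctx: f"blog post based on: {ctx}"

result = run_sequential([
    (researcher, "research AI trends"),
    (writer, "write a blog post"),
])
# → "blog post based on: trend report"
```

The consensual and hierarchical processes on the roadmap would replace this fixed loop with richer routing between agents.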


Creating a crew roughly looks like this:


          
import os
from crewai import Agent, Task, Crew, Process

os.environ["OPENAI_API_KEY"] = "YOUR KEY"

# You can choose to use a local model through Ollama for example.
#
# from langchain.llms import Ollama
# ollama_llm = Ollama(model="openhermes")

# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search

from langchain.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()

# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You work at a leading tech think tank.
  Your expertise lies in identifying emerging trends.
  You have a knack for dissecting complex data and presenting
  actionable insights.""",
  verbose=True,
  allow_delegation=False,
  tools=[search_tool]
  # You can pass an optional llm attribute specifying which model you want to use.
  # It can be a local model through Ollama / LM Studio or a remote
  # model like OpenAI, Mistral, Anthropic, or others (https://python.langchain.com/docs/integrations/llms/)
  #
  # Examples:
  # llm=ollama_llm # was defined above in the file
  # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7)
)
writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Content Strategist, known for
  your insightful and engaging articles.
  You transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True,
  # (optional) llm=ollama_llm
)

# Create tasks for your agents
task1 = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.
  Your final answer MUST be a full analysis report""",
  agent=researcher
)

task2 = Task(
  description="""Using the insights provided, develop an engaging blog
  post that highlights the most significant AI advancements.
  Your post should be informative yet accessible, catering to a tech-savvy audience.
  Make it sound cool, avoid complex words so it doesn't sound like AI.
  Your final answer MUST be the full blog post of at least 4 paragraphs.""",
  agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  verbose=2,  # You can set it to 1 or 2 for different logging levels
  process=Process.sequential
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)

As shown above, two agents are defined: researcher gathers information, and writer produces the article from that information; the tasks are then executed in sequential order. Tasks themselves can also be customized, for example by pointing them at a specific information source. The following tool gathers content from reddit's LocalLLaMA subreddit.


          
# pip install praw
import time

import praw
from langchain.tools import tool

class BrowserTool():
    @tool("Scrape reddit content")
    def scrape_reddit(max_comments_per_post=5):
        """Useful to scrape reddit content"""
        reddit = praw.Reddit(
            client_id="your-client-id",
            client_secret="your-client-secret",
            user_agent="your-user-agent",
        )
        subreddit = reddit.subreddit("LocalLLaMA")
        scraped_data = []

        for post in subreddit.hot(limit=10):
            post_data = {"title": post.title, "url": post.url, "comments": []}

            try:
                post.comments.replace_more(limit=0)  # Load top-level comments only
                comments = post.comments.list()
                if max_comments_per_post is not None:
                    comments = comments[:max_comments_per_post]

                for comment in comments:
                    post_data["comments"].append(comment.body)

                scraped_data.append(post_data)

            except praw.exceptions.APIException as e:
                print(f"API Exception: {e}")
                time.sleep(60)  # Sleep for 1 minute before retrying

        return scraped_data
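The @tool decorator used above comes from langchain: it turns a plain function into a tool that the agent can discover by name and description (taken from the docstring) when deciding what to call. Conceptually it does something like this minimal sketch (an illustration only, not langchain's actual implementation):

```python
# Minimal sketch of a @tool-style decorator: it attaches a name and a
# description (from the docstring) to the function so an agent can list
# available tools. Illustration only, not langchain's implementation.
def tool(name):
    def wrap(fn):
        fn.tool_name = name
        fn.tool_description = (fn.__doc__ or "").strip()
        return fn
    return wrap

@tool("Scrape reddit content")
def scrape_reddit(max_comments_per_post=5):
    """Useful to scrape reddit content"""
    return []

# The agent matches on tool_name / tool_description when choosing tools.
```

This is why the docstring on scrape_reddit matters: it is what the LLM sees when deciding whether the tool fits the task.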

To use it, simply replace search_tool with BrowserTool().scrape_reddit. CrewAI also supports weaving a human into an agent's toolset; for the benefits of doing so, see the article "Humans as Tools for LLM Agents (Human-In-The-Loop): Improving Success Rates on Complex Problems".
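The human_tools referenced in the snippet below typically come from langchain's ready-made human-in-the-loop tool, `load_tools(["human"])`, which prompts a person on the console. As a minimal pure-Python stand-in for the idea (hypothetical, not the actual library code):

```python
# Minimal stand-in for a "human" tool: the agent calls it to ask a
# person for input mid-run. langchain ships a ready-made version via
# load_tools(["human"]); this sketch just illustrates the shape.
def human_tool(question: str) -> str:
    """Ask a human operator and return their answer."""
    return input(f"[Agent asks] {question}\n> ")

human_tools = [human_tool]
```

In practice you would use `from langchain.agents import load_tools; human_tools = load_tools(["human"])` rather than hand-rolling this.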


          
# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You are a Senior Research Analyst at a leading tech think tank.
  Your expertise lies in identifying emerging trends and technologies in AI and
  data science. You have a knack for dissecting complex data and presenting
  actionable insights.""",
  verbose=True,
  allow_delegation=False,
  # Passing human tools to the agent
  tools=[search_tool] + human_tools
)

In addition, to save on token costs and preserve privacy, you can connect to Ollama, a platform for running LLMs locally (for an introduction to Ollama, see "Demystifying LLM Application Development (17) - Model Deployment and Inference (frameworks and tools: ggml, mlc-llm, ollama)"). Once Ollama is installed and configured, it can be integrated as follows. Ollama hosts many local models; the popular mistral model is recommended.


          
from langchain.llms import Ollama
ollama_openhermes = Ollama(model="openhermes")
# Pass the Ollama model to your agents: when creating agents within the
# CrewAI framework, pass it as the llm argument to the Agent constructor:

local_expert = Agent(
  role='Local Expert at this city',
  goal='Provide the BEST insights about the selected city',
  backstory="""A knowledgeable local guide with extensive information
  about the city, its attractions and customs""",
  tools=[
    # Helper tools from the crewAI-examples repo
    SearchTools.search_internet,
    BrowserTools.scrape_and_summarize_website,
  ],
  llm=ollama_openhermes,  # Ollama model passed here
  verbose=True
)

Below is a video walking through a complete CrewAI example running on Ollama:

More official examples: https://github.com/joaomdmoura/crewAI-examples
