The Simplest Hands-On Guide to MCP Development


This post won't spend much time explaining what MCP is — a quick search turns up plenty of introductions. I'm writing it because the articles I read left me more confused the further I got, with only a half-understanding of how MCP is actually used. So I decided to work through a real, hands-on example to check whether my mental model matches how the technology really works.

The official architecture diagram

The MCP Server part is the easiest to understand: it exposes the standard MCP protocol to clients.

My questions

What I was fuzzy on is the client and host: how do those two combine to make MCP work with a large model? And does adopting MCP mean you can drop FunctionCalling entirely? While I'm at it, a complaint about domestic Chinese LLMs: in my testing, not a single one handles FunctionCalling reliably — it's always wrong parameters or a wrong format. Don't tell me "90% of calls work"; it doesn't come close.
After reading a few demos, my conclusion is that the MCP protocol can be converted into FunctionCalling, so the old Function approach can still drive AI tool-style calls.
That's easy to understand: both are JSON protocols, so a converter in between is all it takes.
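To make that concrete, here is a rough sketch of the mapping — not the rmcp implementation; the `McpTool` struct and `to_function_call_def` helper below are made up for illustration. An MCP tool's name, description, and input schema slot almost one-to-one into an OpenAI-style function definition:

```rust
// Hypothetical, simplified shapes -- real MCP tool listings and OpenAI
// function definitions carry more fields than shown here.
struct McpTool {
    name: String,
    description: String,
    input_schema: String, // JSON Schema, already serialized
}

// Render an MCP tool as an OpenAI-style function-calling definition.
// Plain string formatting for the sketch; a real converter would build
// this with serde_json instead.
fn to_function_call_def(tool: &McpTool) -> String {
    format!(
        r#"{{"type":"function","function":{{"name":"{}","description":"{}","parameters":{}}}}}"#,
        tool.name, tool.description, tool.input_schema
    )
}

fn main() {
    let sum_tool = McpTool {
        name: "sum".to_string(),
        description: "Calculate the sum of two numbers".to_string(),
        input_schema: r#"{"type":"object","properties":{"a":{"type":"integer"},"b":{"type":"integer"}},"required":["a","b"]}"#.to_string(),
    };
    // The MCP inputSchema becomes the function's `parameters` field unchanged.
    println!("{}", to_function_call_def(&sum_tool));
}
```

Going the other way is just as mechanical, which is why a thin adapter is enough to let an existing FunctionCalling pipeline consume MCP tools.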
As for the other two usage modes, client and IDE, the official demos just say "configure it like this, then pick from a list" — but how is the interaction actually implemented? No details are given. For example, in Agent mode, who performs the MCP call? If the model replies that a tool is needed, whose job is it to invoke it? How is the prompt assembled? And if you can't change any frontend code, how do you fold MCP into an existing agent?

Preparation

SDK
Pick an official MCP SDK according to your preference; I chose the Rust one:
https://github.com/modelcontextprotocol/rust-sdk
The scenario
Start from a plain chat agent, then use a simple MCP service to implement FunctionCall-equivalent AI interaction.
Chat baseline
Runs locally, with ApiPost as the testing tool.
The base runtime environment and framework already exist locally, so I won't provide separate code for them — Rust isn't exactly mainstream in China. If you can't set up the base environment on your own, feel free to add me as a friend and ask for help.
Adding the rust-sdk

rmcp = { git = "https://github.com/modelcontextprotocol/rust-sdk", features = [
    "client",
    "transport-child-process",
    "transport-sse",
], default-features = false }

Implementation

If you only want to understand the flow and aren't interested in the code, the diagram below is enough:

(flow diagram)

MCP Server implementation

Since this is mainly to get a feel for MCP, I used the server from the official SDK demo as-is.
const BIND_ADDRESS: &str = "127.0.0.1:8000";

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    tracing_subscriber::registry()
        .with(
            tracing_subscriber::EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| "debug".to_string().into()),
        )
        .with(tracing_subscriber::fmt::layer())
        .init();

    let config = SseServerConfig {
        bind: BIND_ADDRESS.parse()?,
        sse_path: "/sse".to_string(),
        post_path: "/message".to_string(),
        ct: tokio_util::sync::CancellationToken::new(),
        sse_keep_alive: None,
    };

    let (sse_server, router) = SseServer::new(config);

    // Do something with the router, e.g., add routes or middleware

    let listener = tokio::net::TcpListener::bind(sse_server.config.bind).await?;

    let ct = sse_server.config.ct.child_token();

    let server = axum::serve(listener, router).with_graceful_shutdown(async move {
        ct.cancelled().await;
        tracing::info!("sse server cancelled");
    });

    tokio::spawn(async move {
        if let Err(e) = server.await {
            tracing::error!(error = %e, "sse server shutdown with error");
        }
    });

    let ct = sse_server.with_service(Counter::new);

    tokio::signal::ctrl_c().await?;
    ct.cancel();
    Ok(())
}

Counter is the tool that provides the actual capabilities:
#[derive(Debug, serde::Deserialize, schemars::JsonSchema)]
pub struct StructRequest {
    pub a: i32,
    pub b: i32,
}

#[derive(Clone)]
pub struct Counter {
    counter: Arc<Mutex<i32>>,
}
#[tool(tool_box)]
impl Counter {
    #[allow(dead_code)]
    pub fn new() -> Self {
        Self {
            counter: Arc::new(Mutex::new(0)),
        }
    }

    fn _create_resource_text(&self, uri: &str, name: &str) -> Resource {
        RawResource::new(uri, name.to_string()).no_annotation()
    }

    #[tool(description = "Increment the counter by 1")]
    async fn increment(&self) -> Result<CallToolResult, McpError> {
        let mut counter = self.counter.lock().await;
        *counter += 1;
        Ok(CallToolResult::success(vec![Content::text(
            counter.to_string(),
        )]))
    }

    #[tool(description = "Decrement the counter by 1")]
    async fn decrement(&self) -> Result<CallToolResult, McpError> {
        let mut counter = self.counter.lock().await;
        *counter -= 1;
        Ok(CallToolResult::success(vec![Content::text(
            counter.to_string(),
        )]))
    }

    #[tool(description = "Get the current counter value")]
    async fn get_value(&self) -> Result<CallToolResult, McpError> {
        let counter = self.counter.lock().await;
        Ok(CallToolResult::success(vec![Content::text(
            counter.to_string(),
        )]))
    }

    #[tool(description = "Say hello to the client")]
    fn say_hello(&self) -> Result<CallToolResult, McpError> {
        Ok(CallToolResult::success(vec![Content::text("hello")]))
    }

    #[tool(description = "Repeat what you say")]
    fn echo(
        &self,
        #[tool(param)]
        #[schemars(description = "Repeat what you say")]
        saying: String,
    ) -> Result<CallToolResult, McpError> {
        Ok(CallToolResult::success(vec![Content::text(saying)]))
    }

    #[tool(description = "Calculate the sum of two numbers")]
    fn sum(
        &self,
        #[tool(aggr)] StructRequest { a, b }: StructRequest,
    ) -> Result<CallToolResult, McpError> {
        Ok(CallToolResult::success(vec![Content::text(
            (a + b).to_string(),
        )]))
    }
}
const_string!(Echo = "echo");
#[tool(tool_box)]
impl ServerHandler for Counter {
    fn get_info(&self) -> ServerInfo {
        ServerInfo {
            protocol_version: ProtocolVersion::V_2024_11_05,
            capabilities: ServerCapabilities::builder()
                .enable_prompts()
                .enable_resources()
                .enable_tools()
                .build(),
            server_info: Implementation::from_build_env(),
            instructions: Some("This server provides a counter tool that can increment and decrement values. The counter starts at 0 and can be modified using the 'increment' and 'decrement' tools. Use 'get_value' to check the current count.".to_string()),
        }
    }

    async fn list_resources(
        &self,
        _request: Option<PaginatedRequestParam>,
        _: RequestContext<RoleServer>,
    ) -> Result<ListResourcesResult, McpError> {
        Ok(ListResourcesResult {
            resources: vec![
                self._create_resource_text("str:////Users/to/some/path/", "cwd"),
                self._create_resource_text("memo://insights", "memo-name"),
            ],
            next_cursor: None,
        })
    }

    async fn read_resource(
        &self,
        ReadResourceRequestParam { uri }: ReadResourceRequestParam,
        _: RequestContext<RoleServer>,
    ) -> Result<ReadResourceResult, McpError> {
        match uri.as_str() {
            "str:////Users/to/some/path/" => {
                let cwd = "/Users/to/some/path/";
                Ok(ReadResourceResult {
                    contents: vec![ResourceContents::text(cwd, uri)],
                })
            }
            "memo://insights" => {
                let memo = "Business Intelligence Memo\n\nAnalysis has revealed 5 key insights ...";
                Ok(ReadResourceResult {
                    contents: vec![ResourceContents::text(memo, uri)],
                })
            }
            _ => Err(McpError::resource_not_found(
                "resource_not_found",
                Some(json!({
                    "uri": uri
                })),
            )),
        }
    }

    async fn list_prompts(
        &self,
        _request: Option<PaginatedRequestParam>,
        _: RequestContext<RoleServer>,
    ) -> Result<ListPromptsResult, McpError> {
        Ok(ListPromptsResult {
            next_cursor: None,
            prompts: vec![Prompt::new(
                "example_prompt",
                Some("This is an example prompt that takes one required argument, message"),
                Some(vec![PromptArgument {
                    name: "message".to_string(),
                    description: Some("A message to put in the prompt".to_string()),
                    required: Some(true),
                }]),
            )],
        })
    }

    async fn get_prompt(
        &self,
        GetPromptRequestParam { name, arguments }: GetPromptRequestParam,
        _: RequestContext<RoleServer>,
    ) -> Result<GetPromptResult, McpError> {
        match name.as_str() {
            "example_prompt" => {
                let message = arguments
                    .and_then(|json| json.get("message")?.as_str().map(|s| s.to_string()))
                    .ok_or_else(|| {
                        McpError::invalid_params("No message provided to example_prompt", None)
                    })?;

                let prompt =
                    format!("This is an example prompt with your message here: '{message}'");
                Ok(GetPromptResult {
                    description: None,
                    messages: vec![PromptMessage {
                        role: PromptMessageRole::User,
                        content: PromptMessageContent::text(prompt),
                    }],
                })
            }
            _ => Err(McpError::invalid_params("prompt not found", None)),
        }
    }

    async fn list_resource_templates(
        &self,
        _request: Option<PaginatedRequestParam>,
        _: RequestContext<RoleServer>,
    ) -> Result<ListResourceTemplatesResult, McpError> {
        Ok(ListResourceTemplatesResult {
            next_cursor: None,
            resource_templates: Vec::new(),
        })
    }
}
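For reference, what actually travels over the SSE transport is plain JSON-RPC 2.0. The frames below are assembled by hand to show the shape of a `tools/call` exchange against the `sum` tool above — the helper functions are illustrative only, and the MCP spec defines further optional fields not shown here:

```rust
// Hand-assembled JSON-RPC 2.0 frames for an MCP tools/call exchange.
// In the real rmcp stack these are built and parsed by the SDK.
fn sum_call_request(id: u64, a: i32, b: i32) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{id},"method":"tools/call","params":{{"name":"sum","arguments":{{"a":{a},"b":{b}}}}}}}"#
    )
}

// The shape the Counter's `sum` tool produces: a CallToolResult
// with a single text content item.
fn sum_call_result(id: u64, a: i32, b: i32) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{id},"result":{{"content":[{{"type":"text","text":"{}"}}],"isError":false}}}}"#,
        a + b
    )
}

fn main() {
    println!("{}", sum_call_request(1, 2, 3));
    println!("{}", sum_call_result(1, 2, 3));
}
```

The `{"content":[{"type":"text",...}],"isError":false}` result shape is exactly what shows up later in the chat integration's log output.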

Integrating into chat

Below is how I integrated it into my own project, for reference only:
pub struct McpTestStrategy;
#[async_trait::async_trait]
impl AgentStrategy for McpTestStrategy {
    async fn execute(&self, req: &ChatCompletionRequest, _: &UserInfo, _: &[ChatCompletionRequestMessage], _: bool) -> CommonResult<Response> {
        let mcp_config = McpConfig{ server: vec![McpServerConfig{ name: "mcp server".to_string(), transport: McpServerTransportConfig::Sse { url: "http://localhost:8000/sse".to_string() } }] };
        let mcp_clients = mcp_config.create_mcp_clients().await?;
        let mut tool_set = ToolSet::default();
        for (name, client) in mcp_clients {
            println!("loading mcp tools: {}", name);
            let server = client.peer().clone();
            let tools = get_mcp_tools(server).await?;

            for tool in tools {
                println!("adding tool: {}", tool.name());
                tool_set.add_tool(tool);
            }
        }
        let mut messages = Vec::new();
        let mut system_prompt =
            "you are an assistant, you can help the user complete various tasks. you have the following tools to use:\n".to_string();
        for tool in tool_set.tools() {
            system_prompt.push_str(&format!(
                "\ntool name: {}\ndescription: {}\nparameters: {}\n",
                tool.name(),
                tool.description(),
                serde_json::to_string_pretty(&tool.parameters()).unwrap_or_default()
            ));
        }
        // add tool call format guidance
        system_prompt.push_str(
            "\nif you need to call a tool, please use the following format:\n\
        Tool: <tool name>\n\
        Inputs: <inputs>\n",
        );
        messages.push(ChatUtil::system_msg(system_prompt)?);
        messages.push(ChatUtil::user_msg(req.messages.last().unwrap().content.to_raw_string())?);
        let chat_api_config = ChatConfigManager::get_chat_model_config();
        let client = Client::with_config(chat_api_config.config);
        let result = ReasonerDomainService::chat(client, chat_api_config.model_name, messages, ResponseFormat::Text).await?;
        // if the model asked for a tool call, run it
        let mcp_call = ToolCall::from_string(&result.content_messages);
        if let Some(mcp_call) = mcp_call {
            let tool = tool_set.get_tool(&mcp_call.name);
            match tool {
                None => {}
                Some(mcp_tool) => {
                    let mcp_result = &mcp_tool
                        .call(json!(mcp_call.arguments)).await?;
                    info!("mcp call result: {:?}", mcp_result);
                    let tool_call_result = serde_json::from_str::<ToolResult2>(mcp_result).expect("failed to deserialize MCP result");
                    let mut result = String::new();
                    for content in tool_call_result.content {
                        result.push_str(&format!("{}{}", &content.text, "\n"));
                    }
                    return Ok(ApiOK(Some(result)).into_response());
                }
            }
        }
        Ok(ApiOK(Some(result)).into_response())
    }
}
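The `ToolCall::from_string` call above parses the `Tool:` / `Inputs:` format that the system prompt asks the model to emit. My real parser is more defensive, but a minimal sketch of the idea looks like this (`parse_tool_call` is a stand-in for illustration, not the project's actual code):

```rust
// Minimal sketch of parsing the "Tool: <name>\nInputs: <inputs>"
// convention that the system prompt asks the model to follow.
fn parse_tool_call(reply: &str) -> Option<(String, String)> {
    let mut name = None;
    let mut inputs = None;
    for line in reply.lines() {
        let line = line.trim();
        if let Some(rest) = line.strip_prefix("Tool:") {
            name = Some(rest.trim().to_string());
        } else if let Some(rest) = line.strip_prefix("Inputs:") {
            inputs = Some(rest.trim().to_string());
        }
    }
    // Only a reply carrying both fields counts as a tool call.
    Some((name?, inputs?))
}

fn main() {
    let reply = "I will call a tool.\nTool: sum\nInputs: {\"a\": 2, \"b\": 3}";
    assert_eq!(
        parse_tool_call(reply),
        Some(("sum".to_string(), "{\"a\": 2, \"b\": 3}".to_string()))
    );
    // Replies with no tool call parse to None, so plain answers pass through.
    assert_eq!(parse_tool_call("just a normal answer"), None);
    println!("parsed ok");
}
```

This text-parsing step is what replaces the model-native FunctionCalling machinery: the model never returns a structured `tool_calls` field, so the host extracts the call from plain text instead.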

Testing
curl --request POST \
  --url http://localhost:8080/v1/chat/completions \
  --header 'Accept: */*' \
  --header 'Accept-Encoding: gzip, deflate, br' \
  --header 'Connection: keep-alive' \
  --header 'Content-Type: application/json' \
  --header 'User-Agent: PostmanRuntime-ApipostRuntime/1.1.0' \
  --header 'requestid: 3248912354' \
  --header 'token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHBpcmVzSW4iOiI2MDQ4MDAiLCJzeXN0ZW1JZCI6IjYxOTUyNjAxMTM4IiwibG9naW5UeXBlIjoiMSIsInNoYXJlVG9rZW4iOiJmYWxzZSIsInVzZXJUeXBlIj...g2OCIsInRpbWVzdGFtcCI6IjE3NDAzNzk1MjM1NDUifQ.UN0nfnW4niTEQdsRiw5FAsXgRdmMZy3xaFOXeL2jXLo' \
  --data '{
    "biz_code":"mcp_test",
    "stream": false,
    "messages": [
        {
            "role": "user",
            "content": "say_hello"
        }
    ]
}'

The run logs the following:
mcp call result: "{\"content\":[{\"type\":\"text\",\"text\":\"1054\"}],\"isError\":false}"

Afterword

Due to time constraints I haven't integrated MCP with streaming output yet. The MCP service does stream over SSE, but to get a quick end-to-end result the chat layer currently collects the whole stream and returns it as a single response. Next steps: fold MCP results into the streaming response path, and return validation results (such as missing parameters) directly to the user. I also plan to start the MCP service inside the current service rather than running it as a separate process.
Overall, the experience is far better than the FunctionCall approach.
With MCP on top, this Rust-based high-performance enterprise LLM service gains another level of productivity.
One more note: MCP makes FunctionCall much less important. DeepSeek R1 + MCP is the strongest combination right now.

Appendix:
My technology choices for building a high-performance enterprise LLM service

| Purpose | Crate | Version |
| --- | --- | --- |
| Language | rust | 1.86 |
| Web framework | axum | 0.8.1 |
| HTTP client | reqwest | 0.12 |
| Error handling | anyhow | 1.0 |
| Auth | jsonwebtoken | 9.3 |
| ORM | sea-orm | 1.1 |
| Cache | redis | 0.29 |
| Async runtime | tokio | 1.43 |
| Logging | tracing | 0.1.41 |
| OpenAI API | async-openai | 0.27 |
| Templating | handlebars | 6.3 |
| Image processing | opencv | 0.94 |

Feel free to follow my WeChat official account so we can learn from each other and improve together.

