Introducing The Model Context Protocol (www.anthropic.com)
92 points by benocodes 2 hours ago | 26 comments





Hmm I like the idea of providing a unified interface to all LLMs to interact with outside data. But I don't really understand why this is local only. It would be a lot more interesting if I could connect this to my github in the web app and claude automatically has access to my code repositories.

I guess I can do this for my local file system now?

I also wonder: if I build an LLM-powered app, and currently simply do RAG and then inject the retrieved data into my prompts, should this replace that? Can I even integrate this in a useful way?

The use case of running locally on your machine with your specific data seems very narrow to me right now, considering how many different context sources and use cases there are.


> It would be a lot more interesting if I could connect this to my github in the web app and claude automatically has access to my code repositories.

From the link:

> To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.


Yes but you need to run those servers locally on your own machine. And use the desktop client. That just seems... weird?

I’m glad they're pushing for standards here; literally everyone has been writing their own integrations, and the level of fragmentation (as they also mention) and repetition going into building the infra around agents is super high.

We’re building an in-terminal coding agent, and our next step was to connect to external services like Sentry and GitHub, where we would also have been making bespoke integrations or using a closed-source provider. We appreciate that they have MCP integrations for those services already. Thanks Anthropic!


I've been implementing a lot of this exact stuff over the past month, and couldn't agree more. And they even typed the python SDK -- with pydantic!! An exciting day to be an LLM dev, that's for sure. Will be immediately switching all my stuff to this (assuming it's easy to use without their starlette `server` component...)

@jspahrsummers and I have been working on this for the last few months at Anthropic. I am happy to answer any questions people might have.

For additional context, the PyPI package: https://pypi.org/project/mcp/

And the GitHub repo: https://github.com/modelcontextprotocol


Do you have a roadmap for the future of the protocol?

Is it versioned? I.e., does this release constitute an immutable protocol for the time being?


You can read how we're implementing versioning here: https://spec.modelcontextprotocol.io/specification/basic/ver...

It's not exactly immutable, but any backwards incompatible changes would require a version bump.
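Concretely, versioning in MCP is negotiated at connection time: the client proposes a date-based protocol version in its "initialize" request and the server answers with the version it will actually speak. A minimal sketch of the server's side, where the SUPPORTED_VERSIONS list and the fallback rule are illustrative rather than the spec's exact algorithm:

```python
# Sketch of MCP's date-based version negotiation during "initialize".
# SUPPORTED_VERSIONS and the fallback rule here are illustrative.
SUPPORTED_VERSIONS = ["2024-11-05"]

def negotiate(client_version: str) -> str:
    # The client proposes a version; the server echoes it if supported,
    # otherwise replies with one it does support (the client should
    # disconnect if it can't speak the server's answer).
    if client_version in SUPPORTED_VERSIONS:
        return client_version
    return SUPPORTED_VERSIONS[-1]

# The client's half of the exchange, as a JSON-RPC 2.0 request:
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"protocolVersion": "2024-11-05", "capabilities": {}},
}
```

A backwards-incompatible spec change would then surface as a new date string that old clients simply never propose.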

We don't have a roadmap in one particular place, but we'll be populating GitHub Issues, etc. with all the stuff we want to get to! We want to develop this in the open, with the community.


Followup: is this a protocol yet, or just a set of libraries? This page is empty: https://spec.modelcontextprotocol.io/

Sorry, I think that's just the nav on those docs being confusing (particularly on mobile). You can see the spec here: https://spec.modelcontextprotocol.io/specification/

Are there any resources for building the LLM side of MCP so we can use the servers with our own integration? Is there a specific schema for exposing MCP information to tool use or computer use?

Both the Python and TypeScript SDKs can be used to build a client: https://github.com/modelcontextprotocol/typescript-sdk/tree/... and https://github.com/modelcontextprotocol/python-sdk/tree/main.... The TypeScript client is widely used, while the Python side is more experimental.

In addition, I recommend looking at the specification documentation at https://spec.modelcontextprotocol.io. This should give you a good overview of how to implement a client. If you are looking to see an implemented open source client, Zed implements an MCP client: https://github.com/zed-industries/zed/tree/main/crates/conte...

If you have specific questions, please feel free to start a discussion in the respective https://github.com/modelcontextprotocol repository, and we are happy to help you with integrating MCP.


Thanks! Do Anthropic models get extra training/RLHF/fine-tuning for MCP use or is it an extension of tool use?

Super cool and much-needed open standard. Wondering how this will work for websites/platforms that don't have exposed APIs (LinkedIn, for example).

First, thank you for working on this.

Second, a question. Computer Use and JSON mode are great for creating a quasi-API for legacy software which offers no integration possibilities. Can MCP better help with legacy software interactions, and if so, in what ways?


Probably, yes! You could imagine building an MCP server (integration) for a particular piece of legacy software, and inside that server, you could employ Computer Use to actually use and automate it.

The benefit would be that to the application connecting to your MCP server, it just looks like any other integration, and you can encapsulate a lot of the complexity of Computer Use under the hood.

If you explore this, we'd love to see what you come up with!
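To make that concrete, such a wrapper would advertise the legacy workflow as an ordinary MCP tool. The name/description/inputSchema shape follows MCP tool definitions; the submit_invoice tool and its stubbed handler below are hypothetical:

```python
# Hypothetical MCP tool definition wrapping a legacy desktop app.
# The name/description/inputSchema shape follows MCP tool definitions.
legacy_tool = {
    "name": "submit_invoice",
    "description": "Fill and submit an invoice in the legacy desktop app",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer": {"type": "string"},
            "amount": {"type": "number"},
        },
        "required": ["customer", "amount"],
    },
}

def handle_submit_invoice(arguments: dict) -> str:
    # In a real server this is where you'd drive the legacy UI
    # (e.g. via Computer Use); here we only validate and stub it.
    missing = [k for k in legacy_tool["inputSchema"]["required"]
               if k not in arguments]
    if missing:
        return f"error: missing {missing}"
    return f"submitted invoice for {arguments['customer']}"
```

The connecting application only ever sees the tool schema, so all the Computer Use machinery stays encapsulated behind it.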


It seems from the demo videos like the Claude desktop app will soon support MCP. Can you share any info on when it will be rolled out?

Already available in the latest version at https://claude.ai/download!

What is a practical use case for this protocol?

One common use case I've been relying on is connecting a development database running in a local Docker container to Claude Desktop or any other MCP client (e.g. an IDE assistant panel). I visualize the database layout in Claude Desktop and then create a Django ORM layer in my editor (which has MCP integration).

Internally we have seen people experiment with a wide variety of integrations, from reading data files to managing their GitHub repositories through Claude using MCP. Alex's post https://x.com/alexalbert__/status/1861079762506252723 has some good examples. Alternatively, please take a look at https://github.com/modelcontextprotocol/servers for a set of servers we found useful.
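For reference, wiring a local server like that into Claude Desktop goes through its config file (claude_desktop_config.json); a sketch along the lines of the published examples, where the connection string is a placeholder:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
```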


You can use MCP with Sourcegraph's Cody as well

https://sourcegraph.com/blog/cody-supports-anthropic-model-c...


I am curious: why this instead of feeding your LLM an OpenAPI spec?

It's not about the interface to make a request to a server, it's about how the client and server can interact.

For example:

When and how should notifications be sent and how should they be handled?

---

It's a lot more like LSP.
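A concrete illustration of the difference: in JSON-RPC 2.0, which MCP (like LSP) is built on, a message without an "id" is a notification, and the receiver handles it but must not answer it. A small sketch:

```python
import json

def is_notification(raw: str) -> bool:
    # In JSON-RPC 2.0, a message with no "id" field is a notification:
    # the receiver acts on it but must not send a response.
    return "id" not in json.loads(raw)

# A server-initiated push from the MCP spec, telling the client
# that its tool list changed:
notification = '{"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}'
# An ordinary request, which does expect a matching response:
request = '{"jsonrpc": "2.0", "id": 7, "method": "tools/list"}'
```

An OpenAPI spec describes request/response endpoints only; it has no vocabulary for this kind of server-initiated traffic, which is the interaction layer MCP standardizes.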


makes sense, thanks for the explanation!

I think OpenAI spec function calls are to this what raw bytes are to Unix file descriptors.

Same reason in Emacs we use lsp-mode and eglot these days instead of ad-hoc flymake and comint integrations. Plug and play.

Thank you for creating this.


Twitter doesn't work anymore unless you are logged in.

https://unrollnow.com/status/1861079762506252723



