
Self-hosting an MCP Registry for discovery using the modelcontextprotocol.io registry


I’ve been having a lot of conversations with customers lately about how to improve governance around MCP (Model Context Protocol) servers when using AI-powered development tools like GitHub Copilot. Right now, most organizations fall into one of two camps:

  • MCP servers are completely disabled because there’s no way to control what developers can use.
  • Everything is wide open, like an all-you-can-eat buffet.

Neither approach is ideal. Why? One, disabling all of them limits the power of AI-powered development tools. Two, with everything wide open, anyone can install an MCP server that runs arbitrary code on their machine. If you’re using MCP servers, you should only add servers from trusted sources and double-check both the editor and server configurations before starting them.

So, how do you tighten governance? The good news is that GitHub has been working on this for a while. They’ve introduced new controls (see Internal MCP registry and allowlist controls for VS Code Insiders – GitHub Changelog and MCP registry and allowlist controls for VS Code Stable in public preview – GitHub Changelog).

Admins can now configure MCP registries and enforce allowlist policies right from the administrative interface. This means enterprises can control which MCP servers are available for install in IDEs, giving them much better governance and security.

Let’s take a step back and look at the types of MCP servers you can use today. There are two main options:

  • Local servers using STDIO transport
  • Local or remote servers using streamable-http (HTTP) transport

If all the MCP servers you plan to allow are remote and use the streamable-http type, you are in luck. You can easily leverage the MCP server functionality available in Azure API Center.

But here’s the catch: most real-world scenarios involve a mix of both STDIO and remote HTTP MCP servers. If you want a single registry (or inventory) that includes both types, you’ll need to host your own registry.

You have two choices:

  1. Build your own API using the MCP server specification for the discovery API. This gives you full control but requires time and effort.
  2. If that’s not feasible, the Model Context Protocol team provides a ready-to-use Docker container image that you can deploy and get started quickly.

The container image contains both the discovery and the publishing parts.

Getting started

The setup below explains how to set up the registry locally in a development environment on your machine using Docker. The same steps apply to a production environment; the registry URL simply needs to be publicly accessible.

For more information on the development environment of the registry, please read the documentation.

The Docker container image requires the use of PostgreSQL. You can run the container by running the following Docker command:

docker run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword -d --network mcp-net postgres:latest

Note that I am using a Docker network, as the containers will need to be able to communicate with each other. To create a Docker network, run the following command:

docker network create mcp-net

Once the database is up and running, you will need to create a database inside Postgres. For this example, I am using the name mcp-registry.
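If you prefer the command line over a UI, one way to create the database is to run psql inside the running Postgres container. This assumes the container name (postgres) and password from the earlier docker run command:

```shell
# Create the mcp-registry database inside the running postgres container.
# The database name is quoted because it contains a hyphen.
docker exec -it postgres psql -U postgres -c 'CREATE DATABASE "mcp-registry";'
```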

You can use PgAdmin as a UI tool to connect and manage Postgres. You can run PgAdmin using Docker via the following command:

docker run --name pgadmin -d -p 8001:80 -e 'PGADMIN_DEFAULT_EMAIL=admin@local.tld' -e 'PGADMIN_DEFAULT_PASSWORD=admin' --network mcp-net dpage/pgadmin4:latest

It will then be available at http://localhost:8001

You now need to prepare the seed data and the environment variables to run the registry container.

Preparing the environment variables

To run the container, you’ll need to provide several environment variables for configuration. Passing them all individually on the command line can get messy, so here’s a simpler approach:

Create a file with your environment variables in the format KEY=VALUE, and then use the --env-file parameter in your docker run command to load them. This keeps things organized and makes your setup easier to manage.

Some documentation for the available env variables can be found here and here. Below is a sample of my configuration:
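As a rough illustration, an env file could look like the fragment below. The two MCP_REGISTRY_SEED_* variables are the ones discussed in this post; the database connection variable name and value are assumptions on my part, so verify the exact names against the registry documentation linked above:

```
# .env — illustrative sample; verify variable names against the registry docs.
# Connection string assumes the postgres container name and password used earlier.
MCP_REGISTRY_DATABASE_URL=postgres://postgres:mysecretpassword@postgres:5432/mcp-registry
MCP_REGISTRY_SEED_FROM=/data/seed.json
MCP_REGISTRY_ENABLE_REGISTRY_VALIDATION=true
```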

Preparing the seed data

The Model Context Protocol team provides a way to populate data when the container starts, using the MCP_REGISTRY_SEED_FROM environment variable. The registry reads the referenced file and seeds the database with its contents. In combination with MCP_REGISTRY_ENABLE_REGISTRY_VALIDATION, it can also validate that the seed data is valid.

The data should be in JSON format: an array of MCP server objects conforming to the MCP server JSON schema. Below is an example of seed data:
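The fragment below sketches what a single-entry seed file might look like. The server name, package identifier, and field layout are illustrative, based on my reading of the schema; validate your file against the published MCP server JSON schema before relying on it:

```json
[
  {
    "name": "io.github.example/my-mcp-server",
    "description": "Illustrative STDIO MCP server entry",
    "version": "1.0.0",
    "packages": [
      {
        "registryType": "npm",
        "identifier": "@example/my-mcp-server",
        "version": "1.0.0",
        "transport": { "type": "stdio" }
      }
    ]
  }
]
```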

If you keep the MCP_REGISTRY_SEED_FROM environment variable set when you start the container, it will re-seed the data each time. Once the data has been seeded, comment out that environment variable.

Running the registry container

To run the container, you can now execute the Docker run command below:
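A command along these lines should work. The image name and tag, as well as the seed-file mount path, are assumptions here; substitute the container image published by the Model Context Protocol team and your own file paths:

```shell
# Run the registry on the same Docker network as Postgres, loading the
# environment variables from the .env file prepared earlier.
# Image name and /data/seed.json mount path are assumptions; adjust as needed.
docker run --name mcp-registry -d -p 8080:8080 \
  --network mcp-net \
  --env-file .env \
  -v "$(pwd)/seed.json:/data/seed.json" \
  ghcr.io/modelcontextprotocol/registry:latest
```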

This will make the registry available at http://localhost:8080. You can query the servers by navigating to http://localhost:8080/v0.1/servers.
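As a quick sanity check, you can hit the discovery endpoint from the command line (this assumes the registry container from the previous step is up):

```shell
# List the seeded servers; the response should be JSON containing your entries.
curl -s http://localhost:8080/v0.1/servers
```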

Accessing the registry container publicly

If you’ve completed the steps above, you now have a local MCP Registry ready for discovery. The next question is: how do you connect this to your GitHub Administrative UI?

Here’s the key: you’ll need a publicly accessible URL for your registry. This allows GitHub Copilot (and the IDEs) to discover and restrict to only the MCP servers you have configured in your registry.

GitHub Administrative Interface, AI Controls, MCP

In my case, I used Microsoft Dev Tunnels. This allowed me to create a tunnel from my development machine to the internet; think of it as a public proxy in front of my local registry. To create the tunnel, you can use the following commands:
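With the devtunnel CLI installed, the flow looks roughly like this. The port matches the registry from the earlier steps, and the anonymous-access flag is what makes the URL reachable without authentication, which is what GitHub needs to query it:

```shell
# Sign in with a Microsoft Entra ID, Microsoft, or GitHub account.
devtunnel user login

# Host a tunnel that forwards the registry's local port 8080 and allows
# unauthenticated access to the public URL.
devtunnel host -p 8080 --allow-anonymous
```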

All creation of dev tunnels requires either a Microsoft Entra ID, Microsoft, or GitHub account.

Once the tunnel is started, you can set the URL in the MCP Registry field in the GitHub Administration UI.

But Dom…

How do I add or delete servers once the registry is up and running?

When using the registry in read-only discovery mode, you can’t update servers directly. To make changes, you’d normally go through the publishing process, but since we’re not publishing packages here, you have a few options:

  1. Update the database manually
  2. Build your own interface
  3. Restart the container using the MCP_REGISTRY_SEED_FROM environment variable, pointing to a data version that includes your updates (for example, newly added server versions).
    Important note: If you leave this environment variable set, the next time the container starts, it will try to seed the data again, which can cause errors. So, make sure to remove or unset it after the initial update.

As usual, if you do believe there are better ways, do let me know, always happy to learn and share my findings with the community.

Happy MCP-ing!