§ reference · operators

MCP deployment

Powerloom deploys Model Context Protocol (MCP) servers as first-class fleet citizens — scoped by organizational unit, audited per tool call, governed by the same RBAC model as agents themselves. This page covers the deployment shape, the template catalog, and how to author your own.


What an MCP deployment is

An MCP server exposes tools to agents. Postgres exposes query and mutate. Slack exposes send_message and read_channel. Files exposes read and write. The agent calls a tool; the MCP server runs it; the result returns.

In Powerloom, an MCP deployment is the running instance of a server, scoped to an OU. One MCPDeployment resource = one running container = one set of tools available to agents in that scope.

The reconciler manages the lifecycle. You declare the deployment in YAML, apply, and the reconciler stands the container up via Terraform, wires it into your VPC, registers its tools with the control plane, and starts streaming health checks.


Template catalog

Powerloom ships fifteen pre-built templates. Each is a versioned MCP server image with a defined tool surface, parameter schema, and credential requirements.

Template    Tools surfaced                                    Common use
postgres    query, mutate, describe_schema                    Agent reads or writes a customer database
slack       send_message, read_channel, search                Notifications, triage agents
github      list_issues, comment, create_pr, list_files       Code review, issue triage
jira        list_tickets, create_ticket, transition, comment  Engineering workflow agents
files       read, write, list, search                         Document agents over a shared filesystem
gmail       send, list, read, search                          Outbound communication agents
s3          get_object, put_object, list_objects              Asset pipelines
confluence  read_page, create_page, search                    Knowledge agents
linear      list_issues, create_issue, update_status          Issue tracker agents
notion      read_page, create_page, query_database            Knowledge agents over Notion
bigquery    query, describe_table                             Analytics agents
stripe      list_charges, refund, lookup_customer             Finance ops agents
hubspot     lookup_contact, update_deal, create_note          CRM agents
datadog     query_logs, query_metrics, list_alerts            SRE agents
pagerduty   list_incidents, acknowledge, escalate             Incident response agents

Each template has its own parameter schema — the Postgres template needs a connection string, the GitHub template needs an installation ID, etc. weave describe template/<name> shows the full schema.
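
As a sketch of what that schema looks like for the Postgres template — written in the same JSON-Schema style that MCPTemplate manifests use, with the field layout illustrative rather than the authoritative output of weave describe:

```yaml
# Illustrative only -- the authoritative schema comes from
# `weave describe template/postgres`.
name: postgres
parameter_schema:
  type: object
  required: [connection_string_credential_ref]
  properties:
    connection_string_credential_ref: { type: string, format: credential-ref }
    schema: { type: string }   # e.g. "public"
tool_schema:
  - name: query
  - name: mutate
  - name: describe_schema
```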


Declaring a deployment

apiVersion: powerloom/v1
kind: MCPDeployment
metadata:
  name: support-postgres
  scope_ou_path: /acme/support
  template: postgres
  parameters:
    connection_string_credential_ref: cred:/acme/support/support-readonly-db
    schema: public
  description: Read-only access to the support database

Three things are happening here:

  1. The deployment lives at /acme/support. Only agents in that OU or deeper can reach it. An agent in /acme/engineering cannot call its tools, even though the deployment exists in the same organization.

  2. The connection string is referenced, not embedded. connection_string_credential_ref points at a Credential resource. Powerloom never sees the raw value — it's stored as wrapped ciphertext under your tenant KMS key, decrypted only inside the MCP container at request time.

  3. The template determines the tool surface. postgres exposes query, mutate, and describe_schema. The deployment automatically registers these with the control plane on standup; agents in scope can call them.

Apply it with weave apply -f deployment-postgres.yaml. The reconciler stands the container up (typically 30-60 seconds for the first deployment of a template, faster on subsequent ones). When status flips to healthy, the tools are live.
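
What a healthy deployment looks like, as surfaced in the console at /mcp — the status values are documented below under Health and observability, but the exact field names here are assumptions, not a published schema:

```yaml
# Illustrative status shape -- field names are assumptions.
name: support-postgres
status: healthy                                  # pending -> provisioning -> healthy
tools_registered: [query, mutate, describe_schema]
```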


Scoping who can use it

By default, every agent in the same OU or below can call every tool the deployment exposes. To restrict further, write a per-tool RBAC binding:

apiVersion: powerloom/v1
kind: RoleBinding
metadata:
  principal_ref: agent:/acme/support/triage-bot
  role: ToolUser
  scope_resource: mcp-deployment:/acme/support/support-postgres
  decision_type: allow
  conditions:
    tool_names: [query, describe_schema]   # mutate excluded

The triage bot can read; it cannot write. The mutate tool stays unavailable to it even though the deployment exposes it. Other agents in the OU still have full access unless similarly narrowed.

For mutations specifically, a common pattern is to require a second approver:

apiVersion: powerloom/v1
kind: ApprovalPolicy
metadata:
  name: postgres-mutate-requires-approval
  scope_ou_path: /acme/support
  resource_kind: mcp-tool-call
  resource_match:
    tool_name: mutate
    deployment: support-postgres
  approver_role: OUAdmin
  ttl_seconds: 3600

Now any agent calling mutate on the support-postgres deployment lands in pending until an OUAdmin signs off. The audit trail records the request, the approver, the comment, and the eventual decision.
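
Illustratively, an entry in that audit trail might take the following shape — the documented facts are which items get recorded; the field names and principal paths here are hypothetical:

```yaml
# Illustrative audit entry -- field names and principals are hypothetical.
resource_kind: mcp-tool-call
deployment: support-postgres
tool_name: mutate
requested_by: agent:/acme/support/billing-bot    # hypothetical agent
decision: approved
approver: user:/acme/support/ou-admin            # hypothetical OUAdmin principal
comment: "One-off correction, verified the row by hand"
```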


Health and observability

/mcp in the console shows every deployment in your org with its current status: pending, provisioning, healthy, degraded, failed. Click any deployment to see its tool list, recent invocations, and error rate.

The reconciler runs a health probe every 30 seconds. A degraded deployment fires a deploy_failed notification — owners get an in-app card, an email, and a push if they've opted in.

For raw observability — every tool call payload, every agent that invoked it, every result — see the audit log at /audit. Filter by resource_kind=mcp-tool-call and the deployment name.


Authoring custom templates

If none of the fifteen catalog templates fit, write your own. A custom template is:

  1. A container image that implements the MCP protocol (see modelcontextprotocol.io for the spec).
  2. An MCPTemplate manifest that declares its tool surface, parameter schema, and image reference.

apiVersion: powerloom/v1
kind: MCPTemplate
metadata:
  name: internal-orders-api
  display_name: Internal orders API
  image: ghcr.io/acme/orders-mcp:1.4.2
  parameter_schema:
    type: object
    required: [api_token_credential_ref, base_url]
    properties:
      api_token_credential_ref: { type: string, format: credential-ref }
      base_url: { type: string, format: uri }
  tool_schema:
    - name: lookup_order
      input_schema: { ... }
    - name: refund_order
      input_schema: { ... }

Apply the template once per environment. From then on, deployments reference it by template: internal-orders-api like any catalog template.
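
For instance, a deployment of the custom template follows the same shape as the Postgres example above, supplying the parameters its schema requires — the credential path and URL here are illustrative:

```yaml
apiVersion: powerloom/v1
kind: MCPDeployment
metadata:
  name: support-orders
  scope_ou_path: /acme/support
  template: internal-orders-api
  parameters:
    api_token_credential_ref: cred:/acme/support/orders-api-token  # illustrative path
    base_url: https://orders.internal.example                      # illustrative URL
  description: Order lookups and refunds for support agents
```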


What this doesn't do

Powerloom doesn't proxy or rewrite tool payloads — when an agent calls query on the Postgres deployment, the SQL goes straight to your database. We govern who can call which tool and log what happened; we don't sit between the agent and the tool's runtime semantics.

Result handling, retry policy, and timeout behavior live in the MCP server image itself. If you need a different policy from what a template ships with, write a custom template that wraps it.