LLM Tooling

Every functional, web, and workflow endpoint on a Microbus bus can be invoked as an LLM tool, with no extra code on the endpoint side. The framework reuses each microservice’s OpenAPI document as the tool description and dispatches the LLM’s tool calls back over the bus. There are two delivery paths:

  • Internal - in-process callers use the LLM core microservice’s Chat endpoint to drive a tool-calling loop with a chosen provider (Claude, ChatGPT, Gemini, or any custom provider that implements the Turn contract).
  • External - MCP-aware clients (Claude Desktop, Claude Code, and others) connect to the MCP portal, which exposes the same set of endpoints over the Model Context Protocol.

Both paths read from the same source of truth and apply the same filtering, so an operator who fixes a description in code or tightens requiredClaims on an endpoint sees the change reflected everywhere immediately.

OpenAPI Is the Description

Every connector serves an OpenAPI document on its built-in :888/openapi.json control endpoint. The document is rendered just-in-time from the connector’s live subscription map, scoped to function, web, and workflow features. Tasks, outbound events, framework infrastructure subscriptions, and other control endpoints are filtered out at this boundary.

The handler is actor-aware: operations whose requiredClaims the caller cannot satisfy are omitted from the document, and the response carries Cache-Control: private, no-store because the rendered output varies per caller. There is no separate “tools manifest” to keep in sync - the API description and the tool surface are the same artifact.

Per-field documentation flows in from two sources:

  • The endpoint’s godoc, with optional structured Input: and Output: sections that the generator reflects into per-argument descriptions.
  • jsonschema:"description=..." struct tags on custom request and response types.

These render once into the OpenAPI document and propagate uniformly to both the internal and external paths.

Two Kinds of Tools: Endpoints and Workflows

Microbus exposes two distinct shapes of work as tools, and an LLM can use either through the same tool-calling interface:

  • Traditional endpoints - functional (RPC) and web handlers. These are synchronous request/response calls that complete in a single bus round-trip, typically in milliseconds. The LLM gets back a value (or an HTTP response body) and continues the conversation.
  • Agentic workflows - multi-step processes orchestrated by Foreman, persisted to a SQL database, and able to run for minutes, hours, or days. Workflows support parallel fan-out and fan-in across tasks, human-in-the-loop interrupts, and automatic recovery from failures.

From the LLM’s point of view both shapes look the same: a tool with a name, a description, and an input schema. The model emits a tool call and receives output in return. The framework handles the difference under the hood:

  • Endpoint tools are dispatched as a single Microbus RPC. The result is the endpoint’s response.
  • Workflow tools are dispatched as a dynamic subgraph through Foreman. llm.core (or the MCP portal) starts a new flow under the calling conversation and waits for it to complete; the flow’s final state is returned as the tool’s output.

This means an LLM can drive a coherent process that mixes quick lookups (endpoint tools) with long-running, durable, multi-step procedures (workflow tools) - all without leaving the tool-calling loop and all under the same authorization, observability, and audit machinery as any other Microbus call.

A practical pattern: expose data lookups (InventoryLookup, CustomerProfile) as functional endpoint tools and expose decision processes (CreditApproval, IncidentTriage) as workflow tools. The LLM picks the right shape per turn, and the operator gets durability and human-in-the-loop support exactly where it matters.

Path 1 - Internal (LLM Core Service)

A microservice that wants an LLM to invoke other microservices calls llm.core’s Chat endpoint with the canonical URLs of the endpoints it wants to expose:

import (
    "github.com/microbus-io/fabric/coreservices/claudellm/claudellmapi"
    "github.com/microbus-io/fabric/coreservices/llm/llmapi"

    // Application-specific service clients (illustrative import paths)
    "example.com/app/calculator/calculatorapi"
    "example.com/app/inventory/inventoryapi"
)

messages := []llmapi.Message{{Role: "user", Content: "What is 3 + 5?"}}
toolURLs := []string{
    calculatorapi.Arithmetic.URL(),
    inventoryapi.LookupSKU.URL(),
}
messagesOut, usage, err := llmapi.NewClient(svc).Chat(
    ctx,
    claudellmapi.Hostname,
    claudellmapi.ModelHaiku45,
    messages,
    toolURLs,
    nil,
)

llm.core fetches each tool’s host :888/openapi.json in parallel, finds the matching operation, and converts its request-body JSON Schema into the tool schema the chosen provider expects. The bus’s actor JWT propagates with each fetch, so the OpenAPI handler returns the actor-filtered subset. The LLM only ever sees tools the actor is authorized to invoke.

When the model emits tool calls, llm.core dispatches each one as a normal Microbus request - same authorization, same tracing, same metrics. The actor is preserved end-to-end. Workflow tools are dispatched specially: llm.core runs them as dynamic subgraphs through Foreman rather than as a single RPC.

Endpoints from multiple microservices can be combined in one toolURLs slice. When two endpoints share an operation name, the first keeps the bare name and subsequent ones get _2, _3, … suffixes in argument order, so the LLM can disambiguate without the operator having to.

For multi-turn conversations that may exceed a single request’s time budget, the ChatLoop workflow runs the same flow as a durable agentic workflow.

Provider Neutrality

The provider argument is the hostname of any microservice that implements the Turn contract. Microbus ships with claude.llm.core, chatgpt.llm.core, and gemini.llm.core, but any custom provider that implements Turn and is added to the application can be selected per call. Switching providers is a one-line change at the call site, with no global “active provider” config to flip.

Path 2 - External (MCP Portal)

The MCP portal is a core microservice that speaks the Model Context Protocol on its public endpoint. MCP-aware clients (Claude Desktop, Claude Code, IDE extensions, and so on) connect to a single URL and:

  1. List the tools the connected actor is authorized to invoke.
  2. Call them by name with arguments the LLM constructs.

For tool listing and invocation, the portal builds the tool surface from the same per-microservice OpenAPI documents that path 1 uses. Authorization is enforced through the same requiredClaims filtering, so a user logged in with a roles.admin claim sees a different tool set than one without it. Tool names follow the same disambiguation rules as path 1.

Tool dispatch routes through the bus, picking up the verified caller identity, actor claim propagation, interservice ACL, and per-endpoint authorization just like any other Microbus call. An external LLM client cannot invoke tools its actor is not authorized to invoke, even if it knows their canonical URLs.

Actor-based Authorization

Tools inherit Microbus’s actor-based authorization model end-to-end. There is no separate “tool authorization” layer to configure - the same requiredClaims boolean expressions that gate endpoint calls from one microservice to another also gate them when an LLM is the caller.

The flow is the same on both paths:

  1. The request enters the bus carrying an actor JWT (the user the request is being made on behalf of). For path 1 the actor is whatever the calling microservice was already running under; for path 2 the actor is derived from the user’s authentication at the MCP portal’s edge.
  2. llm.core or the MCP portal fetches each candidate endpoint’s OpenAPI document on :888/openapi.json. The connector renders the document per caller: an operation whose requiredClaims the actor cannot satisfy is omitted entirely. Operations the actor cannot reach are not just hidden in the UI - they are absent from the JSON.
  3. The filtered set becomes the tool catalog the LLM sees. The model can only emit tool calls for operations the actor was already entitled to invoke.
  4. When the LLM emits a tool call, the dispatch goes back over the bus as a normal Microbus request. The actor JWT propagates automatically and requiredClaims are re-evaluated at the receiving endpoint, so even a carefully crafted out-of-band tool call cannot bypass authorization.

The two-layer enforcement (filter at discovery, re-check at dispatch) is intentional. The filter keeps the LLM from being tempted to call something it shouldn’t; the re-check guarantees that no bug or omission in the discovery layer can turn into a privilege escalation.

A practical consequence: the same Microbus deployment can expose very different tool surfaces to different users without any per-user configuration. A user with roles.admin sees admin-only tools; the same MCP portal serving a guest sees only the public ones. The framework keeps them aligned because both paths read from the same actor-filtered OpenAPI.

Tool Eligibility

Endpoint type | Exposed as a tool? | Notes
Functional (RPC) | Yes | Most natural fit; arguments map directly to JSON Schema. Magic arguments (httpRequestBody, httpResponseBody, httpStatusCode) work as documented.
Web handler | Yes | Tool input is the request body. The handler has direct access to the raw HTTP envelope.
Workflow | Yes | llm.core dispatches as a dynamic subgraph through Foreman. The MCP portal invokes through the same path.
Task | No | Tasks are workflow steps, not standalone callable units. Filtered out at the OpenAPI handler.
Outbound event | No | Events are fan-out, not request/response. Filtered out at the OpenAPI handler.
Inbound event sink | No | Same reason: not a callable contract.
Control / infra (:888, :417, :428) | No | Framework-internal; not part of the public OpenAPI document.

Authorization (requiredClaims) is the per-caller filter on top of the eligibility table. An eligible endpoint is invisible to a caller whose actor cannot satisfy its requiredClaims.

Authoring Tool-Friendly Endpoints

The same code conventions that make an endpoint pleasant to call from another Go microservice also make it pleasant for an LLM to invoke. Both paths benefit from:

  • Structured Input: / Output: sections in godoc. Per-argument descriptions land in the OpenAPI document and from there in the LLM’s tool schema. See HTTP arguments for the convention.
  • jsonschema:"description=..." tags on custom struct types. Per-field descriptions propagate the same way for nested objects.
  • Specific requiredClaims. The actor-aware OpenAPI filter is what keeps untrusted callers from even seeing privileged tools, let alone calling them. Coarse claims widen the tool surface; narrow claims keep it scoped.
  • Conservative argument types. Strict types and well-bounded enums let the LLM construct valid calls more reliably than open-ended strings.

For step-by-step examples of using Chat and ChatLoop from microservice code, see LLM integration. For the package-level reference, see coreservices/llm.