Introducing MCP Audit

A real-time token profiler for MCP tools. See exactly what your AI agents are doing.

Tags: mcp, ai-agents, developer-tools, launch

Today I’m releasing MCP Audit, a real-time token profiler for Model Context Protocol (MCP) tools.

The Problem

When building AI agents with Claude, I realised I had no visibility into what my MCP tools were actually doing. How many tokens was each tool call using? What was the latency? Was I wasting money on inefficient prompts?

MCP servers are powerful, but they’re also opaque. Every tool call sends data to an external process, and you have no idea what’s happening under the hood.

The Solution

MCP Audit is a simple proxy that sits between your agent and MCP servers. It intercepts every tool call and logs:

  • Token counts - Input and output tokens per call
  • Cost estimates - Based on current model pricing
  • Latency - How long each tool takes to respond
  • Request/response data - Full visibility into what’s being sent

All data stays local. No cloud, no accounts, no tracking.
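
To make the proxy idea concrete, here is a rough sketch of the general approach, assuming MCP's stdio transport (newline-delimited JSON-RPC messages): spawn the real server, pass every message through untouched, and note a token estimate and latency for each tools/call on stderr so the protocol stream itself is never disturbed. This is not MCP Audit's actual code; the file name, the estimate_tokens heuristic, and the log format are all made up for illustration.

# audit_sketch.py (hypothetical): a minimal stdio pass-through that accounts
# for tools/call traffic. Not mcp-audit's implementation.
import json
import subprocess
import sys
import threading
import time


def estimate_tokens(message: dict) -> int:
    # Crude heuristic: roughly 4 characters per token. A real profiler would
    # use a proper tokenizer for the model in question.
    return len(json.dumps(message)) // 4


def main() -> None:
    # Usage (hypothetical): python audit_sketch.py npx @modelcontextprotocol/server-brave-search
    server = subprocess.Popen(
        sys.argv[1:],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    pending: dict = {}  # request id -> (tool name, start time)

    def forward_requests() -> None:
        # Agent -> server: pass each JSON-RPC line through, note tools/call requests.
        for line in sys.stdin:
            server.stdin.write(line)
            server.stdin.flush()
            try:
                msg = json.loads(line)
            except json.JSONDecodeError:
                continue
            if msg.get("method") == "tools/call":
                name = msg.get("params", {}).get("name", "?")
                pending[msg.get("id")] = (name, time.monotonic())
                print(f"[audit] -> {name}: ~{estimate_tokens(msg)} input tokens",
                      file=sys.stderr)

    threading.Thread(target=forward_requests, daemon=True).start()

    # Server -> agent: pass responses through, report output size and latency.
    for line in server.stdout:
        sys.stdout.write(line)
        sys.stdout.flush()
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue  # stray non-JSON output from the server
        rid = msg.get("id")
        if rid in pending:
            name, started = pending.pop(rid)
            print(f"[audit] <- {name}: ~{estimate_tokens(msg)} output tokens, "
                  f"{time.monotonic() - started:.2f}s", file=sys.stderr)


if __name__ == "__main__":
    main()

Because the metrics go to stderr, the agent and the server keep talking over stdin/stdout exactly as before.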

Getting Started

Install with your preferred Python package manager:

pipx install mcp-audit
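
If you prefer plain pip, the package name on PyPI is the same, so installing into a virtual environment should also work:

pip install mcp-audit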

Then wrap your MCP server command:

mcp-audit -- npx @modelcontextprotocol/server-brave-search
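
The same wrapping can be applied from an MCP client's configuration. As an illustration only (assuming a standard Claude Desktop-style mcpServers entry, mcp-audit on your PATH, and the same "--" pass-through as the command above), the entry might look like:

{
  "mcpServers": {
    "brave-search": {
      "command": "mcp-audit",
      "args": ["--", "npx", "@modelcontextprotocol/server-brave-search"]
    }
  }
}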

That’s it. You’ll see real-time metrics as you interact with your agent.

What’s Next

I’m planning to add:

  • Export to JSON/CSV for analysis
  • Grafana dashboard integration
  • Cost alerts and thresholds
  • Multi-session comparison

Check out MCP Audit on GitHub or install it from PyPI.

Feedback welcome at hello@littlebearapps.com.


Nathan

Little Bear Apps