Advanced testing framework for Model Context Protocol (MCP) servers with enhanced error handling, performance testing, and comprehensive tool validation.
- Cleaner Repository: Moved development scripts from the root to the `scripts/` directory
- Smaller Package: Development scripts excluded from the npm package
- Better Structure: Root directory now contains only essential files
- Comprehensive Tool Testing Examples: Complete documentation with real-world examples
- Enhanced Error Messages: Schema validation errors now include helpful suggestions
- Multi-Step Workflow Patterns: Examples showing how to chain tool tests together
- Common Assertion Patterns: 5 different assertion types with examples
- Custom error classes for better debugging (`MCPTestError`, `ConnectionError`, `TestTimeoutError`)
- Detailed error codes and contextual information
- Retry logic with configurable attempts and delays
- Performance Testing: Measure response times, concurrent handling, and memory stability
- Protocol Compliance: Validate JSON-RPC responses and error handling
- Error Handling Tests: Test server behavior with invalid inputs
- Enhanced Tool Testing: Schema validation, performance thresholds, and detailed assertions
- Detailed performance metrics (avg, min, max times)
- Test recommendations based on results
- Memory stability tracking
- Enhanced HTML-friendly reports
- More verbose and helpful error messages
- Configurable timeouts per test type
- Tool response previews
- Assertion tracking
```bash
npm install -g @robertdouglass/mcp-tester

# Or use directly with npx
npx @robertdouglass/mcp-tester --help
```

```bash
# Auto-detect transport type (recommended for HTTP servers)
mcp-tester auto http://localhost:3000/mcp --verbose

# Test stdio server
mcp-tester stdio node ./my-server.js --verbose

# Test with specific transport
mcp-tester streamableHttp http://localhost:3000/mcp --verbose
```

```bash
# Run all test suites
mcp-tester auto http://localhost:3000/mcp \
  --verbose \
  --performance \
  --compliance \
  --error-handling \
  --timeout 60000 \
  --retry 3
```

```javascript
const { MCPTestFrameworkAdvanced } = require('@robertdouglass/mcp-tester');

async function testMyServer() {
  const framework = new MCPTestFrameworkAdvanced({
    verbose: true,
    timeout: 30000,
    retryAttempts: 2,
    performanceThresholds: {
      toolCall: 2000,  // Max 2s for tool calls
      discovery: 500   // Max 500ms for discovery
    }
  });

  const tests = {
    name: 'My Server Tests',
    testDiscovery: true,
    testStability: true,
    testPerformance: true,
    testProtocolCompliance: true,
    testErrorHandling: true,
    toolTests: [
      {
        toolName: 'my_tool',
        arguments: { input: 'test' },
        assertions: [
          async (result) => {
            if (!result.content) throw new Error('No content');
            if (result.content[0].type !== 'text') {
              throw new Error('Expected text content');
            }
          }
        ]
      }
    ],
    customTests: [
      {
        name: 'Custom validation',
        fn: async (client) => {
          const tools = await client.listTools();
          return { toolCount: tools.tools.length };
        }
      }
    ]
  };

  await framework.testServer(
    { type: 'stdio', command: 'node', args: ['./server.js'] },
    tests
  );

  const report = await framework.generateReport();
  framework.printSummary(report);
}
```

- Lists and validates tools, resources, and prompts
- Checks for required fields and schema completeness
- Measures discovery performance
- Rapid sequential requests (20 requests)
- Concurrent request handling (10 parallel)
- Memory stability over 50 iterations
- Response time variance analysis
- Tool discovery performance benchmarking
- Concurrent request handling metrics
- Individual tool execution timing
- Performance threshold validation
- JSON-RPC response format validation
- Error response structure verification
- Required field presence checks
- Invalid tool name handling
- Malformed argument handling
- Timeout behavior verification
- Connection failure recovery
- Input schema validation
- Custom assertion support
- Performance threshold checking
- Response structure validation
The most powerful feature of mcp-tester is testing individual MCP tools with custom arguments and assertions.
```javascript
const { MCPTestFrameworkAdvanced } = require('@robertdouglass/mcp-tester');

async function testMyTool() {
  const framework = new MCPTestFrameworkAdvanced({ verbose: true });

  await framework.testServer(
    { type: 'streamableHttp', url: 'http://localhost:3000/mcp' },
    {
      name: 'My Tool Test',
      testDiscovery: false,
      testStability: false,
      toolTests: [
        {
          toolName: 'my_tool',
          arguments: {
            input: 'test data',
            format: 'json'
          },
          assertions: [
            async (result) => {
              if (!result.content) throw new Error('No content returned');
              console.log('Tool result:', result.content[0].text);
            }
          ]
        }
      ]
    }
  );
}
```

Test multiple tools in sequence, perfect for workflows like project creation:
```javascript
async function testProjectWorkflow() {
  const framework = new MCPTestFrameworkAdvanced({ verbose: true });

  // Step 1: List available servers
  await framework.testServer(
    { type: 'streamableHttp', url: 'http://localhost:3000/mcp' },
    {
      name: 'Get Server List',
      toolTests: [
        {
          toolName: 'server_list',
          arguments: { output: 'json' },
          assertions: [
            async (result) => {
              const data = JSON.parse(result.content[0].text);
              console.log('Available servers:', data.length);
            }
          ]
        }
      ]
    }
  );

  // Step 2: Create project using server from step 1
  await framework.testServer(
    { type: 'streamableHttp', url: 'http://localhost:3000/mcp' },
    {
      name: 'Create Project',
      toolTests: [
        {
          toolName: 'project_create',
          arguments: {
            description: 'My New Project',
            serverId: 'server-id-from-step-1'
          },
          assertions: [
            async (result) => {
              const response = JSON.parse(result.content[0].text);
              if (response.status !== 'success') {
                throw new Error('Project creation failed');
              }
              console.log('Project created with ID:', response.data.projectId);
            }
          ]
        }
      ]
    }
  );
}
```

```javascript
// 1. Check response structure
async (result) => {
  if (!result.content?.[0]?.text) {
    throw new Error('Invalid response structure - no text content');
  }
}

// 2. Validate JSON response
async (result) => {
  const data = JSON.parse(result.content[0].text);
  if (data.status !== 'success') {
    throw new Error(`Operation failed: ${data.message}`);
  }
}

// 3. Performance assertion
async (result, metadata) => {
  if (metadata.duration > 5000) {
    throw new Error(`Tool too slow: ${metadata.duration}ms > 5000ms`);
  }
}

// 4. Content validation
async (result) => {
  const text = result.content[0].text;
  if (!text.includes('expected-value')) {
    throw new Error('Response missing expected content');
  }
}

// 5. Schema validation
async (result) => {
  const data = JSON.parse(result.content[0].text);
  const requiredFields = ['id', 'name', 'status'];
  for (const field of requiredFields) {
    if (!(field in data)) {
      throw new Error(`Missing required field: ${field}`);
    }
  }
}
```

```javascript
// Complete example testing Mittwald project operations
async function testMittwaldProjects() {
  const framework = new MCPTestFrameworkAdvanced({
    verbose: true,
    performanceThresholds: { toolCall: 3000 }
  });

  const tests = {
    name: 'Mittwald Project Management Tests',
    testDiscovery: false,
    toolTests: [
      // Test 1: List projects
      {
        toolName: 'mittwald_project_list',
        arguments: { output: 'json' },
        assertions: [
          async (result) => {
            const data = JSON.parse(result.content[0].text);
            console.log(`Found ${data.data.length} projects`);
            return data.data; // Can return data for use in assertions
          }
        ]
      },
      // Test 2: Get specific project details
      {
        toolName: 'mittwald_project_get',
        arguments: {
          projectId: 'your-project-id',
          output: 'json'
        },
        assertions: [
          async (result) => {
            const project = JSON.parse(result.content[0].text);
            if (!project.data.isReady) {
              throw new Error('Project is not ready');
            }
            console.log(`Project "${project.data.description}" is ready`);
          }
        ]
      },
      // Test 3: Create new project (commented out for safety)
      /*
      {
        toolName: 'mittwald_project_create',
        arguments: {
          description: 'Test Project',
          serverId: 'your-server-id'
        },
        assertions: [
          async (result) => {
            const response = JSON.parse(result.content[0].text);
            if (response.status === 'success') {
              console.log('✅ Project created:', response.data.projectId);
            } else {
              throw new Error('Project creation failed');
            }
          }
        ]
      }
      */
    ]
  };

  await framework.testServer(
    { type: 'streamableHttp', url: 'http://localhost:3000/mcp' },
    tests
  );

  const report = await framework.generateReport();
  framework.printSummary(report);
}
```

You can also test individual tools from the command line by creating test files:
```bash
# Create a test file
echo 'module.exports = { toolTests: [{ toolName: "my_tool", arguments: {}, assertions: [] }] }' > my-test.js

# Run it (hypothetical - not implemented yet)
mcp-tester auto http://localhost:3000/mcp --test-file my-test.js
```

```javascript
{
  verbose: false,               // Detailed logging
  timeout: 30000,               // Test timeout in ms
  outputDir: './test-results',  // Report output directory
  retryAttempts: 0,             // Connection retry attempts
  retryDelay: 1000,             // Delay between retries
  validateSchemas: true,        // Validate tool input schemas
  performanceThresholds: {
    toolCall: 5000,             // Max tool call duration
    discovery: 1000             // Max discovery duration
  }
}
```

The framework uses custom error classes for better debugging:
- `MCPTestError`: Base error class with code and details
- `ConnectionError`: Connection-specific errors with transport info
- `TestTimeoutError`: Test timeout errors with test name and timeout value
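A minimal sketch of catching these errors, assuming the error classes are exported from the package entry point alongside `MCPTestFrameworkAdvanced` and that `testServer()` rejects with them on failure:

```javascript
// Sketch only: assumes ConnectionError and MCPTestError are exported from the
// package and that testServer() rejects with them when something goes wrong.
const {
  MCPTestFrameworkAdvanced,
  ConnectionError,
  MCPTestError
} = require('@robertdouglass/mcp-tester');

async function runWithErrorHandling() {
  const framework = new MCPTestFrameworkAdvanced({ retryAttempts: 2 });
  try {
    await framework.testServer(
      { type: 'streamableHttp', url: 'http://localhost:3000/mcp' },
      { name: 'Smoke Test', testDiscovery: true }
    );
  } catch (err) {
    if (err instanceof ConnectionError) {
      console.error('Could not reach the server:', err.message);
    } else if (err instanceof MCPTestError) {
      // Error code and details are documented above as part of MCPTestError
      console.error(`Test error [${err.code}]:`, err.details);
    } else {
      throw err;
    }
  }
}
```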
Reports include:
- Comprehensive metrics (connection attempts, total tests, assertions)
- Per-transport breakdowns
- Individual test results with timings
- Performance insights
- Recommendations for improvements
- Failed test details with error codes
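A short sketch of keeping a machine-readable copy of a report. `generateReport()` and `printSummary()` appear in the examples above; the JSON dump assumes the report object is plain, serializable data:

```javascript
const fs = require('fs');

// Sketch only: prints the console summary and writes the raw report to disk,
// assuming the report returned by generateReport() is JSON-serializable.
async function saveReport(framework) {
  const report = await framework.generateReport();
  framework.printSummary(report);

  fs.mkdirSync('./test-results', { recursive: true });
  fs.writeFileSync('./test-results/report.json', JSON.stringify(report, null, 2));
}
```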
```
--verbose               Show detailed output
--timeout <ms>          Set test timeout (default: 30000)
--retry <attempts>      Number of connection retries (default: 0)
--performance           Run performance tests
--compliance            Run protocol compliance tests
--error-handling        Run error handling tests
--header "Key: Value"   Add HTTP header
--auth "token"          Add auth token
```

- Always run with --verbose during development to see detailed error messages
- Set appropriate timeouts for your server's expected performance
- Use retry logic for unreliable network conditions
- Write comprehensive assertions for tool tests
- Monitor performance thresholds to catch regressions
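Putting those practices together, a typical development run might look like this (the URL is a placeholder for your own endpoint):

```bash
# Verbose output, a generous timeout, retries for flaky connections,
# and performance tests to catch regressions early.
mcp-tester auto http://localhost:3000/mcp \
  --verbose \
  --timeout 60000 \
  --retry 2 \
  --performance
```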
- Check server is running and accessible
- Verify transport type matches server implementation
- Use --retry flag for flaky connections
- Increase timeout with --timeout flag
- Check for server performance issues
- Verify network latency
- Ensure tool arguments match expected schema
- Disable validation with validateSchemas: false if needed
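For example, validation can be turned off for an entire run via the configuration option documented above:

```javascript
const { MCPTestFrameworkAdvanced } = require('@robertdouglass/mcp-tester');

// Skip input schema validation when a tool's declared schema is stricter
// than the test data you want to send (see validateSchemas above).
const framework = new MCPTestFrameworkAdvanced({
  verbose: true,
  validateSchemas: false
});
```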
Contributions welcome! Please submit issues and PRs to: https://github.com/robertDouglass/mcp-tester
MIT © Robert Douglass