Official Swift SDK for the Model Context Protocol (MCP).
The Model Context Protocol (MCP) defines a standardized way for applications to communicate with AI and ML models. This Swift SDK implements both client and server components according to the 2025-03-26 (latest) version of the MCP specification.
- Swift 6.0+ (Xcode 16+)
See the Platform Availability section below for platform-specific requirements.
Add the following to your Package.swift file:
```swift
dependencies: [
    .package(url: "https://github.com/modelcontextprotocol/swift-sdk.git", from: "0.10.0")
]
```

Then add the dependency to your target:
.target( name:"YourTarget", dependencies:[.product(name:"MCP",package:"swift-sdk")])The client component allows your application to connect to MCP servers.
```swift
import MCP

// Initialize the client
let client = Client(name: "MyApp", version: "1.0.0")

// Create a transport and connect
let transport = StdioTransport()
let result = try await client.connect(transport: transport)

// Check server capabilities
if result.capabilities.tools != nil {
    // Server supports tools (the presence of the tools capability implies tool calling)
}
```

> [!NOTE]
> The `Client.connect(transport:)` method returns the initialization result. This return value is discardable, so you can ignore it if you don't need to check server capabilities.
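For example, both of these are valid:

```swift
// Capture the result when you need to inspect server capabilities...
let result = try await client.connect(transport: transport)

// ...or simply discard it when you don't (the return value is discardable)
try await client.connect(transport: transport)
```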
For local subprocess communication:
```swift
// Create a stdio transport (simplest option)
let transport = StdioTransport()
try await client.connect(transport: transport)
```

For remote server communication:
```swift
// Create a streaming HTTP transport
let transport = HTTPClientTransport(
    endpoint: URL(string: "http://localhost:8080")!,
    streaming: true  // Enable Server-Sent Events for real-time updates
)
try await client.connect(transport: transport)
```

Tools represent functions that can be called by the client:
```swift
// List available tools
let (tools, cursor) = try await client.listTools()
print("Available tools: \(tools.map { $0.name }.joined(separator: ", "))")

// Call a tool with arguments
let (content, isError) = try await client.callTool(
    name: "image-generator",
    arguments: [
        "prompt": "A serene mountain landscape at sunset",
        "style": "photorealistic",
        "width": 1024,
        "height": 768
    ]
)

// Handle tool content
for item in content {
    switch item {
    case .text(let text):
        print("Generated text: \(text)")
    case .image(let data, let mimeType, let metadata):
        if let width = metadata?["width"] as? Int,
           let height = metadata?["height"] as? Int
        {
            print("Generated \(width)x\(height) image of type \(mimeType)")
            // Save or display the image data
        }
    case .audio(let data, let mimeType):
        print("Received audio data of type \(mimeType)")
    case .resource(let uri, let mimeType, let text):
        print("Received resource from \(uri) of type \(mimeType)")
        if let text = text {
            print("Resource text: \(text)")
        }
    }
}
```

Resources represent data that can be accessed and potentially subscribed to:
```swift
// List available resources
let (resources, nextCursor) = try await client.listResources()
print("Available resources: \(resources.map { $0.uri }.joined(separator: ", "))")

// Read a resource
let contents = try await client.readResource(uri: "resource://example")
print("Resource content: \(contents)")

// Subscribe to resource updates if supported
if result.capabilities.resources?.subscribe == true {
    try await client.subscribeToResource(uri: "resource://example")

    // Register notification handler
    await client.onNotification(ResourceUpdatedNotification.self) { message in
        let uri = message.params.uri
        print("Resource \(uri) updated with new content")

        // Fetch the updated resource content
        let updatedContents = try await client.readResource(uri: uri)
        print("Updated resource content received")
    }
}
```

Prompts represent templated conversation starters:
```swift
// List available prompts
let (prompts, nextCursor) = try await client.listPrompts()
print("Available prompts: \(prompts.map { $0.name }.joined(separator: ", "))")

// Get a prompt with arguments
let (description, messages) = try await client.getPrompt(
    name: "customer-service",
    arguments: [
        "customerName": "Alice",
        "orderNumber": "ORD-12345",
        "issue": "delivery delay"
    ]
)

// Use the prompt messages in your application
print("Prompt description: \(description)")
for message in messages {
    if case .text(text: let text) = message.content {
        print("\(message.role): \(text)")
    }
}
```

Sampling allows servers to request LLM completions through the client, enabling agentic behaviors while maintaining human-in-the-loop control. Clients register a handler to process incoming sampling requests from servers.
> [!TIP]
> Sampling requests flow from server to client, not client to server. This enables servers to request AI assistance while clients maintain control over model access and user approval.
```swift
// Register a sampling handler in the client
await client.withSamplingHandler { parameters in
    // Review the sampling request (human-in-the-loop step 1)
    print("Server requests completion for: \(parameters.messages)")

    // Optionally modify the request based on user input
    var messages = parameters.messages
    if let systemPrompt = parameters.systemPrompt {
        print("System prompt: \(systemPrompt)")
    }

    // Sample from your LLM (this is where you'd call your AI service)
    let completion = try await callYourLLMService(
        messages: messages,
        maxTokens: parameters.maxTokens,
        temperature: parameters.temperature
    )

    // Review the completion (human-in-the-loop step 2)
    print("LLM generated: \(completion)")
    // User can approve, modify, or reject the completion here

    // Return the result to the server
    return CreateSamplingMessage.Result(
        model: "your-model-name",
        stopReason: .endTurn,
        role: .assistant,
        content: .text(completion)
    )
}
```

The sampling flow follows these steps:
```mermaid
sequenceDiagram
    participant S as MCP Server
    participant C as MCP Client
    participant U as User/Human
    participant L as LLM Service

    Note over S,L: Server-initiated sampling request
    S->>C: sampling/createMessage request
    Note right of S: Server needs AI assistance<br/>for decision or content

    Note over C,U: Human-in-the-loop review #1
    C->>U: Show sampling request
    U->>U: Review & optionally modify<br/>messages, system prompt
    U->>C: Approve request

    Note over C,L: Client handles LLM interaction
    C->>L: Send messages to LLM
    L->>C: Return completion

    Note over C,U: Human-in-the-loop review #2
    C->>U: Show LLM completion
    U->>U: Review & optionally modify<br/>or reject completion
    U->>C: Approve completion

    Note over C,S: Return result to server
    C->>S: sampling/createMessage response
    Note left of C: Contains model used,<br/>stop reason, final content

    Note over S: Server continues with<br/>AI-assisted result
```

This human-in-the-loop design ensures that users maintain control over what the LLM sees and generates, even when servers initiate the requests.
Handle common client errors:
```swift
do {
    try await client.connect(transport: transport)
    // Success
} catch let error as MCPError {
    print("MCP Error: \(error.localizedDescription)")
} catch {
    print("Unexpected error: \(error)")
}
```

Configure client behavior for capability checking:
```swift
// Strict configuration - fail fast if a capability is missing
let strictClient = Client(
    name: "StrictClient",
    version: "1.0.0",
    configuration: .strict
)

// With strict configuration, calling a method for an unsupported capability
// will throw an error immediately without sending a request
do {
    // This will throw an error if the resources capability is not available
    let resources = try await strictClient.listResources()
} catch let error as MCPError {
    print("Capability not available: \(error.localizedDescription)")
}

// Default (non-strict) configuration - attempt the request anyway
let client = Client(
    name: "FlexibleClient",
    version: "1.0.0",
    configuration: .default
)

// With default configuration, the client will attempt the request
// even if the capability wasn't advertised by the server
do {
    let resources = try await client.listResources()
} catch let error as MCPError {
    // Still handle the error if the server rejects the request
    print("Server rejected request: \(error.localizedDescription)")
}
```

Improve performance by sending multiple requests in a single batch:
```swift
// Array to hold tool call tasks
var toolTasks: [Task<CallTool.Result, Swift.Error>] = []

// Send a batch of requests
try await client.withBatch { batch in
    // Add multiple tool calls to the batch
    for i in 0..<10 {
        toolTasks.append(
            try await batch.addRequest(
                CallTool.request(.init(name: "square", arguments: ["n": Value(i)]))
            )
        )
    }
}

// Process results after the batch is sent
print("Processing \(toolTasks.count) tool results...")
for (index, task) in toolTasks.enumerated() {
    do {
        let result = try await task.value
        print("\(index): \(result.content)")
    } catch {
        print("\(index) failed: \(error)")
    }
}
```

You can also batch different types of requests:
```swift
// Declare task variables
var pingTask: Task<Ping.Result, Error>?
var promptTask: Task<GetPrompt.Result, Error>?

// Send a batch with different request types
try await client.withBatch { batch in
    pingTask = try await batch.addRequest(Ping.request())
    promptTask = try await batch.addRequest(GetPrompt.request(.init(name: "greeting")))
}

// Process individual results
do {
    if let pingTask = pingTask {
        _ = try await pingTask.value
        print("Ping successful")
    }
    if let promptTask = promptTask {
        let promptResult = try await promptTask.value
        print("Prompt: \(promptResult.description ?? "None")")
    }
} catch {
    print("Error processing batch results: \(error)")
}
```

> [!NOTE]
> The server automatically handles batch requests from MCP clients.
The server component allows your application to host model capabilities and respond to client requests.
```swift
import MCP

// Create a server with given capabilities
let server = Server(
    name: "MyModelServer",
    version: "1.0.0",
    capabilities: .init(
        prompts: .init(listChanged: true),
        resources: .init(subscribe: true, listChanged: true),
        tools: .init(listChanged: true)
    )
)

// Create transport and start server
let transport = StdioTransport()
try await server.start(transport: transport)

// Now register handlers for the capabilities you've enabled
```

Register tool handlers to respond to client tool calls:
```swift
// Register a tool list handler
await server.withMethodHandler(ListTools.self) { _ in
    let tools = [
        Tool(
            name: "weather",
            description: "Get current weather for a location",
            inputSchema: .object([
                "properties": .object([
                    "location": .string("City name or coordinates"),
                    "units": .string("Units of measurement, e.g., metric, imperial")
                ])
            ])
        ),
        Tool(
            name: "calculator",
            description: "Perform calculations",
            inputSchema: .object([
                "properties": .object([
                    "expression": .string("Mathematical expression to evaluate")
                ])
            ])
        )
    ]
    return .init(tools: tools)
}

// Register a tool call handler
await server.withMethodHandler(CallTool.self) { params in
    switch params.name {
    case "weather":
        let location = params.arguments?["location"]?.stringValue ?? "Unknown"
        let units = params.arguments?["units"]?.stringValue ?? "metric"
        let weatherData = getWeatherData(location: location, units: units)  // Your implementation
        return .init(
            content: [.text("Weather for \(location): \(weatherData.temperature)°, \(weatherData.conditions)")],
            isError: false
        )
    case "calculator":
        if let expression = params.arguments?["expression"]?.stringValue {
            let result = evaluateExpression(expression)  // Your implementation
            return .init(content: [.text("\(result)")], isError: false)
        } else {
            return .init(content: [.text("Missing expression parameter")], isError: true)
        }
    default:
        return .init(content: [.text("Unknown tool")], isError: true)
    }
}
```

Implement resource handlers for data access:
```swift
// Register a resource list handler
await server.withMethodHandler(ListResources.self) { params in
    let resources = [
        Resource(
            name: "Knowledge Base Articles",
            uri: "resource://knowledge-base/articles",
            description: "Collection of support articles and documentation"
        ),
        Resource(
            name: "System Status",
            uri: "resource://system/status",
            description: "Current system operational status"
        )
    ]
    return .init(resources: resources, nextCursor: nil)
}

// Register a resource read handler
await server.withMethodHandler(ReadResource.self) { params in
    switch params.uri {
    case "resource://knowledge-base/articles":
        return .init(contents: [
            Resource.Content.text(
                "# Knowledge Base\n\nThis is the content of the knowledge base...",
                uri: params.uri
            )
        ])
    case "resource://system/status":
        let status = getCurrentSystemStatus()  // Your implementation
        let statusJson = """
            {
                "status": "\(status.overall)",
                "components": {
                    "database": "\(status.database)",
                    "api": "\(status.api)",
                    "model": "\(status.model)"
                },
                "lastUpdated": "\(status.timestamp)"
            }
            """
        return .init(contents: [
            Resource.Content.text(statusJson, uri: params.uri, mimeType: "application/json")
        ])
    default:
        throw MCPError.invalidParams("Unknown resource URI: \(params.uri)")
    }
}

// Register a resource subscribe handler
await server.withMethodHandler(ResourceSubscribe.self) { params in
    // Store the subscription for later notifications.
    // Client identity in multi-client scenarios must be managed by the server
    // application, for example using information from the initialize handshake.
    // addSubscription(clientID: /* some client identifier */, uri: params.uri)
    print("Client subscribed to \(params.uri); the server is responsible for tracking this subscription.")
    return .init()
}
```

Implement prompt handlers:
```swift
// Register a prompt list handler
await server.withMethodHandler(ListPrompts.self) { params in
    let prompts = [
        Prompt(
            name: "interview",
            description: "Job interview conversation starter",
            arguments: [
                .init(name: "position", description: "Job position", required: true),
                .init(name: "company", description: "Company name", required: true),
                .init(name: "interviewee", description: "Candidate name")
            ]
        ),
        Prompt(
            name: "customer-support",
            description: "Customer support conversation starter",
            arguments: [
                .init(name: "issue", description: "Customer issue", required: true),
                .init(name: "product", description: "Product name", required: true)
            ]
        )
    ]
    return .init(prompts: prompts, nextCursor: nil)
}

// Register a prompt get handler
await server.withMethodHandler(GetPrompt.self) { params in
    switch params.name {
    case "interview":
        let position = params.arguments?["position"]?.stringValue ?? "Software Engineer"
        let company = params.arguments?["company"]?.stringValue ?? "Acme Corp"
        let interviewee = params.arguments?["interviewee"]?.stringValue ?? "Candidate"

        let description = "Job interview for \(position) position at \(company)"
        let messages: [Prompt.Message] = [
            .user("You are an interviewer for the \(position) position at \(company)."),
            .user("Hello, I'm \(interviewee) and I'm here for the \(position) interview."),
            .assistant("Hi \(interviewee), welcome to \(company)! I'd like to start by asking about your background and experience.")
        ]
        return .init(description: description, messages: messages)
    case "customer-support":
        // Similar implementation for the customer-support prompt
        throw MCPError.invalidParams("customer-support prompt not implemented in this example")
    default:
        throw MCPError.invalidParams("Unknown prompt name: \(params.name)")
    }
}
```

Servers can request LLM completions from clients through sampling. This enables agentic behaviors where servers can ask for AI assistance while maintaining human oversight.
> [!NOTE]
> The current implementation provides the correct API design for sampling, but requires bidirectional communication support in the transport layer. This feature will be fully functional when bidirectional transport support is added.
```swift
// Enable sampling capability in the server
let server = Server(
    name: "MyModelServer",
    version: "1.0.0",
    capabilities: .init(
        sampling: .init(),  // Enable sampling capability
        tools: .init(listChanged: true)
    )
)

// Request sampling from the client (conceptual - requires bidirectional transport)
do {
    let result = try await server.requestSampling(
        messages: [.user("Analyze this data and suggest next steps")],
        systemPrompt: "You are a helpful data analyst",
        temperature: 0.7,
        maxTokens: 150
    )

    // Use the LLM completion in your server logic
    print("LLM suggested: \(result.content)")
} catch {
    print("Sampling request failed: \(error)")
}
```

Sampling enables powerful agentic workflows (see the sketch after this list):
- Decision-making: Ask the LLM to choose between options
- Content generation: Request drafts for user approval
- Data analysis: Get AI insights on complex data
- Multi-step reasoning: Chain AI completions with tool calls
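To make the decision-making case concrete, here is a sketch built on the `requestSampling` call shown above; the option list and string matching are illustrative assumptions, not SDK API:

```swift
// A sketch of a decision-making workflow (assumes the requestSampling API above;
// the options and parsing below are illustrative, not part of the SDK).
func chooseNextAction(server: Server) async throws -> String {
    let options = ["retry", "skip", "abort"]
    let result = try await server.requestSampling(
        messages: [.user("The last sync failed twice. Choose one of: \(options.joined(separator: ", "))")],
        systemPrompt: "Respond with exactly one of the offered options.",
        temperature: 0.0,
        maxTokens: 10
    )

    // The result carries the same content type that client sampling handlers
    // return, so a .text case is expected here
    if case .text(let choice) = result.content, options.contains(choice) {
        return choice
    }
    return "abort"  // Safe default if the completion doesn't match an option
}
```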
Control client connections with an initialize hook:
```swift
// Start the server with an initialize hook
try await server.start(transport: transport) { clientInfo, clientCapabilities in
    // Validate client info
    guard clientInfo.name != "BlockedClient" else {
        throw MCPError.invalidRequest("This client is not allowed")
    }

    // You can also inspect client capabilities
    if clientCapabilities.sampling == nil {
        print("Client does not support sampling")
    }

    // Perform any server-side setup based on client info
    print("Client \(clientInfo.name) v\(clientInfo.version) connected")

    // If the hook completes without throwing, initialization succeeds
}
```

We recommend using Swift Service Lifecycle for managing startup and shutdown of services.
First, add the dependency to your Package.swift:
.package(url:"https://github.com/swift-server/swift-service-lifecycle.git", from:"2.3.0"),Then implement the MCP server as a Service:
```swift
import MCP
import ServiceLifecycle
import Logging

struct MCPService: Service {
    let server: Server
    let transport: Transport

    init(server: Server, transport: Transport) {
        self.server = server
        self.transport = transport
    }

    func run() async throws {
        // Start the server
        try await server.start(transport: transport)

        // Keep running until external cancellation
        try await Task.sleep(for: .seconds(60 * 60 * 24 * 365 * 100))  // Effectively forever
    }

    func shutdown() async throws {
        // Gracefully shut down the server
        await server.stop()
    }
}
```

Then use it in your application:
```swift
import MCP
import ServiceLifecycle
import Logging

let logger = Logger(label: "com.example.mcp-server")

// Create the MCP server
let server = Server(
    name: "MyModelServer",
    version: "1.0.0",
    capabilities: .init(
        prompts: .init(listChanged: true),
        resources: .init(subscribe: true, listChanged: true),
        tools: .init(listChanged: true)
    )
)

// Add handlers directly to the server
await server.withMethodHandler(ListTools.self) { _ in
    // Your implementation
    return .init(tools: [Tool(name: "example", description: "An example tool")])
}

await server.withMethodHandler(CallTool.self) { params in
    // Your implementation
    return .init(content: [.text("Tool result")], isError: false)
}

// Create the MCP service and other services
let transport = StdioTransport(logger: logger)
let mcpService = MCPService(server: server, transport: transport)
let databaseService = DatabaseService()  // Your other services

// Create a service group with signal handling
let serviceGroup = ServiceGroup(
    services: [mcpService, databaseService],
    configuration: .init(
        gracefulShutdownSignals: [.sigterm, .sigint]
    ),
    logger: logger
)

// Run the service group - this blocks until shutdown
try await serviceGroup.run()
```

This approach has several benefits:
- Signal handling: Automatically traps SIGINT and SIGTERM, triggering graceful shutdown
- Graceful shutdown: Properly shuts down your MCP server and other services
- Timeout-based shutdown: Configurable shutdown timeouts to prevent hanging processes
- Advanced service management: `ServiceLifecycle` also supports service dependencies, conditional services, and other useful features (see the sketch below)
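For example (a sketch assuming swift-service-lifecycle 2.x's per-service configuration; check that release's API for the exact names), you can control how the group reacts when an individual service exits:

```swift
// A sketch of per-service configuration in swift-service-lifecycle 2.x.
// Services start in the order listed and shut down in reverse, which is how
// simple dependencies (e.g., database before MCP server) are expressed.
let configuration = ServiceGroupConfiguration(
    services: [
        // If the database service ever exits, take the whole group down gracefully
        .init(
            service: databaseService,
            successTerminationBehavior: .gracefullyShutdownGroup,
            failureTerminationBehavior: .gracefullyShutdownGroup
        ),
        .init(service: mcpService),
    ],
    gracefulShutdownSignals: [.sigterm, .sigint],
    logger: logger
)
let serviceGroup = ServiceGroup(configuration: configuration)
try await serviceGroup.run()
```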
MCP's transport layer handles communication between clients and servers. The Swift SDK provides multiple built-in transports:
| Transport | Description | Platforms | Best for |
|---|---|---|---|
| `StdioTransport` | Implements stdio transport using standard input/output streams | Apple platforms, Linux with glibc | Local subprocesses, CLI tools |
| `HTTPClientTransport` | Implements Streamable HTTP transport using Foundation's URL Loading System | All platforms with Foundation | Remote servers, web applications |
| `InMemoryTransport` | In-memory transport for direct communication within the same process | All platforms | Testing, debugging, same-process client-server communication |
| `NetworkTransport` | Transport using Apple's Network framework for TCP/UDP connections | Apple platforms only | Low-level networking, custom protocols |
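`InMemoryTransport` is particularly convenient for exercising a client and server together in tests. A minimal sketch, assuming the SDK exposes a factory that yields a connected pair of transports (`createConnectedPair()` here; verify the exact name in your SDK version):

```swift
// A sketch of same-process testing over InMemoryTransport.
// The createConnectedPair() factory is an assumption; check your SDK version.
let (clientTransport, serverTransport) = await InMemoryTransport.createConnectedPair()

let server = Server(name: "TestServer", version: "1.0.0", capabilities: .init(tools: .init()))
try await server.start(transport: serverTransport)

let client = Client(name: "TestClient", version: "1.0.0")
try await client.connect(transport: clientTransport)

// Exercise the protocol end to end without any real I/O
let (tools, _) = try await client.listTools()
print("Tools visible to the test client: \(tools.map { $0.name })")
```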
You can implement a custom transport by conforming to the `Transport` protocol:
```swift
import MCP
import Foundation
import Logging

public actor MyCustomTransport: Transport {
    public nonisolated let logger: Logger

    private var isConnected = false
    private let messageStream: AsyncThrowingStream<Data, any Swift.Error>
    private let messageContinuation: AsyncThrowingStream<Data, any Swift.Error>.Continuation

    public init(logger: Logger? = nil) {
        self.logger = logger ?? Logger(label: "my.custom.transport")

        var continuation: AsyncThrowingStream<Data, any Swift.Error>.Continuation!
        self.messageStream = AsyncThrowingStream { continuation = $0 }
        self.messageContinuation = continuation
    }

    public func connect() async throws {
        // Implement your connection logic
        isConnected = true
    }

    public func disconnect() async {
        // Implement your disconnection logic
        isConnected = false
        messageContinuation.finish()
    }

    public func send(_ data: Data) async throws {
        // Implement your message sending logic
    }

    public func receive() -> AsyncThrowingStream<Data, any Swift.Error> {
        return messageStream
    }
}
```

The Swift SDK has the following platform requirements:
| Platform | Minimum Version |
|---|---|
| macOS | 13.0+ |
| iOS / Mac Catalyst | 16.0+ |
| watchOS | 9.0+ |
| tvOS | 16.0+ |
| visionOS | 1.0+ |
| Linux | Distributions with glibc or musl, including Ubuntu, Debian, Fedora, and Alpine Linux |
While the core library works on any platform supporting Swift 6 (including Linux and Windows), running a client or server requires a compatible transport.
We're working to add Windows support.
Enable logging to help troubleshoot issues:
```swift
import Logging
import MCP

// Configure the logging system
LoggingSystem.bootstrap { label in
    var handler = StreamLogHandler.standardOutput(label: label)
    handler.logLevel = .debug
    return handler
}

// Create a logger
let logger = Logger(label: "com.example.mcp")

// Pass it to the client or server
let client = Client(name: "MyApp", version: "1.0.0")

// Pass it to the transport
let transport = StdioTransport(logger: logger)
```

This project follows Semantic Versioning. For pre-1.0 releases, minor version increments (0.X.0) may contain breaking changes.
For details about changes in each release, see the GitHub Releases page.
This project is licensed under the MIT License.