The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for MCP Dev Summit North America to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.
IMPORTANT NOTE: Session timing and room locations are subject to change.
Sign up or log in to add sessions to your schedule and sync them to your phone or calendar.
As MCP deployments grow beyond a few tools, the failure mode isn’t the model—it’s the integration surface. Teams quickly accumulate many MCP servers, inconsistent authentication, duplicated “almost-the-same” tools, and no single place to apply policy, observe behavior, or onboard agents and new systems.
This talk introduces the MCP Gateway pattern: a single MCP entrypoint that federates multiple servers into curated tool surfaces for each agent, workflow, or IDE. Borrowing lessons from the API boom, we’ll show how to structure capabilities into layered building blocks—system access, reusable orchestration, and channel-specific experiences—so you avoid point-to-point spaghetti while keeping integrations composable.
You’ll see a reference architecture that separates front-door caller identity from downstream tool authorization (scoped OAuth or API keys), supports tool allowlists and LLM-facing usage guidance, and adds the controls teams need: routing, versioning, rate limits, audit logs, and end-to-end tracing. You’ll leave with a practical checklist for turning tool sprawl into a governed integration platform that stays interoperable as new agents, clients, and systems arrive.
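The gateway's core moves can be sketched in a few lines. This is a hypothetical, minimal illustration (all server names, credentials, and functions below are invented, not an MCP SDK API): the gateway holds downstream credentials itself, so the caller's front-door identity is never forwarded, and each caller sees only its allowlisted slice of the federated tool surface.

```python
# Hypothetical sketch: a gateway routing table that separates front-door
# caller identity from downstream tool credentials, and curates the
# federated tool list per agent. All names are illustrative.

DOWNSTREAM = {
    # backend name -> (endpoint, credential held by the gateway, not the caller)
    "jira":   ("https://jira.internal/mcp",   "scoped-oauth-token-A"),
    "github": ("https://github.internal/mcp", "api-key-B"),
}

ALLOWLISTS = {
    # caller identity -> tools it may see and invoke
    "ide-agent":      {"jira.search_issues", "github.read_file"},
    "incident-agent": {"jira.search_issues", "jira.create_issue"},
}

def visible_tools(caller: str, all_tools: list[str]) -> list[str]:
    """Curate the federated tool surface for one caller."""
    allowed = ALLOWLISTS.get(caller, set())
    return [t for t in all_tools if t in allowed]

def route(tool: str) -> tuple[str, str]:
    """Resolve a backend-prefixed tool name to its endpoint and credential."""
    server = tool.split(".", 1)[0]
    endpoint, credential = DOWNSTREAM[server]
    return endpoint, credential

all_tools = ["jira.search_issues", "jira.create_issue", "github.read_file"]
print(visible_tools("ide-agent", all_tools))
# Routing uses gateway-held downstream credentials, never the caller's token.
print(route("jira.create_issue"))
```

Rate limits, audit logs, and tracing would hang off the same `route` choke point, which is the main argument for a single entrypoint.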
Alex Salazar is the Co-Founder and CEO of Arcade.dev, the runtime for MCP that enables AI agents to securely take real actions across enterprise systems. He's solving the hardest problems standing between AI agent demos and production deployment: secure agent authorization, high-accuracy... Read More →
How MCP Can Be Used to Build Scalable, Secure, Cloud-Native Agentic Systems on AWS, Azure, and GCP
As enterprises adopt agentic AI, the need for scalable, secure, cloud-native architectures becomes critical. This session explores how the Model Context Protocol (MCP) enables agents to reliably connect with cloud services across AWS, Azure, and GCP using a unified, open standard. Attendees will learn architecture patterns for deploying agents on serverless runtimes and container platforms, strategies for scaling multi-agent workflows, and methods to enforce enterprise-grade security using IAM, secret management, VPC networking, and policy controls. The talk also covers best practices for integrating MCP agents with databases, storage, monitoring, and enterprise APIs, along with techniques for cost optimization and observability. By the end, participants will understand how MCP simplifies interoperability and provides a foundation for building robust, production-ready agentic systems across multi-cloud environments.
The Model Context Protocol (MCP) was designed for robust, cloud-based LLM interactions. However, the proliferation of Small Language Models (SLMs) and their deployment on resource-constrained edge devices (e.g., IoT, mobile) introduces critical challenges to the protocol's current specification. This talk provides a deep dive into the necessary technical adaptations for MCP to thrive at the edge. We will explore:

- Context Window Optimization: Protocol-level strategies for efficient context serialization and deserialization to minimize latency and memory footprint on SLMs.
- Asynchronous Context Management: How to handle intermittent connectivity and power-saving modes on edge devices through novel MCP transport and state management mechanisms.
- Edge-Native Context Caching: A proposal for a lightweight, on-device context caching layer that adheres to the MCP specification while ensuring data freshness and integrity.

Attendees will leave with a clear understanding of the current limitations and a roadmap for contributing to the MCP specification's evolution for the next generation of ubiquitous, context-aware edge AI.
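The caching idea in the proposal can be illustrated with a minimal sketch. This is not part of the MCP specification; the class, its fields, and the TTL policy are all assumptions made for illustration. The clock is injectable so the effect of a device sleeping through its freshness window can be simulated deterministically.

```python
# Illustrative sketch (not part of the MCP spec): a lightweight on-device
# context cache with a freshness (TTL) check, of the kind the talk proposes
# for edge deployments. All names are assumptions.
import time

class EdgeContextCache:
    def __init__(self, ttl_seconds: float, now=time.monotonic):
        self.ttl = ttl_seconds
        self.now = now                # injectable clock for testing
        self._entries = {}            # key -> (serialized_context, stored_at)

    def put(self, key: str, context: bytes) -> None:
        self._entries[key] = (context, self.now())

    def get(self, key: str):
        """Return cached context only while fresh; evict stale entries."""
        entry = self._entries.get(key)
        if entry is None:
            return None
        context, stored_at = entry
        if self.now() - stored_at > self.ttl:
            del self._entries[key]    # stale: force a re-sync when back online
            return None
        return context

clock = [0.0]
cache = EdgeContextCache(ttl_seconds=30.0, now=lambda: clock[0])
cache.put("session-1", b"ctx")
clock[0] = 10.0
print(cache.get("session-1"))   # fresh: b"ctx"
clock[0] = 100.0
print(cache.get("session-1"))   # stale after the TTL: None
```

A real design would also need integrity checks and an invalidation signal from the server, which is where the proposal intersects with MCP transport changes.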
Kierra Dotson is an AI Engineer specializing in the critical intersection of AI strategy, operations (AgentOps), and governance. With a strong background in Cloud Engineering, DevOps, and Data Architecture, she focuses on building scalable, reliable, and compliant AI systems. Kierra... Read More →
The Model Context Protocol enables AI assistants to interface with external tools and data sources, but most examples focus on high-level APIs and databases. This talk explores building a production MCP server that exposes low-level Linux kernel observability data to AI assistants, enabling natural language debugging of complex systems.
`scxtop` is an observability tool for Linux's new sched_ext extensible scheduler framework (https://github.com/sched-ext/scx/tree/main/tools/scxtop). By implementing MCP, it allows developers to ask questions like "Why is my application experiencing high scheduling latency?" and receive AI-driven analysis that correlates kernel tracing data, hardware topology, performance counters, and scheduler internals.
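To make the idea concrete, here is a hypothetical sketch of the kind of analysis helper such a server might expose as a tool: summarizing per-CPU scheduler wait time so an assistant can reason about latency. The input format below is invented sample data, not the real /proc/schedstat layout or scxtop's actual output.

```python
# Hypothetical sketch: summarize per-CPU scheduling wait time from
# schedstat-style samples. The field names and SAMPLE format are invented
# for illustration; scxtop's real data sources differ.

SAMPLE = """\
cpu0 run_ns=5000000 wait_ns=200000 timeslices=120
cpu1 run_ns=4800000 wait_ns=9500000 timeslices=95
"""

def summarize_sched_latency(text: str) -> dict:
    """Return average wait (ns) per timeslice for each CPU."""
    report = {}
    for line in text.strip().splitlines():
        cpu, *fields = line.split()
        stats = dict(f.split("=") for f in fields)
        avg_wait = int(stats["wait_ns"]) / int(stats["timeslices"])
        report[cpu] = round(avg_wait)
    return report

# cpu1's average wait is ~60x cpu0's: the kind of signal an assistant
# would surface when asked about high scheduling latency.
print(summarize_sched_latency(SAMPLE))
```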
Daniel Hodges is a software engineer on the Linux team at Meta. He has previously worked in areas such as observability, profiling, and application performance testing.
Connecting an LLM to a database is the "Hello World" of agentic AI, but scaling that to production requires solving complex problems in security, context management, and reliability. You can't simply feed a 500-table schema into a context window and hope for the best. In this session, the creators of the MCP Toolbox for Databases (12.5k stars) break down the specific architecture required to give agents safe, high-fidelity access to your data. You will learn the patterns that power over 6 million monthly tool calls, including:

- Raw SQL vs. Semantic Abstraction: A framework for deciding when to give an agent raw query power vs. when to abstract logic into strict semantic tools.
- Safety & Governance: Implementing read-only guardrails, query validation, and "Human-in-the-Loop" friction points to prevent accidental data loss or injection risks.
- Reducing Hallucinations: How to format database metadata and column descriptions to drastically improve an agent's query accuracy.
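A read-only guardrail of the kind described above can be sketched as a validation step that runs before any statement reaches the database. This keyword check is illustrative only and is not how MCP Toolbox implements it; a production guardrail needs a real SQL parser to avoid both false positives and bypasses.

```python
# Minimal sketch of a read-only guardrail: reject statements that could
# mutate state before they reach the database. Illustrative only; a real
# implementation should parse SQL rather than pattern-match keywords.
import re

WRITE_KEYWORDS = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|create|grant)\b",
    re.IGNORECASE,
)

def validate_read_only(sql: str) -> None:
    """Raise ValueError for mutating or multi-statement input."""
    # Reject stacked statements ("SELECT 1; DROP TABLE t") outright.
    if ";" in sql.rstrip().rstrip(";"):
        raise ValueError("multi-statement queries are not allowed")
    if WRITE_KEYWORDS.search(sql):
        raise ValueError("only read-only queries are allowed")

validate_read_only("SELECT name FROM customers WHERE id = 42")  # passes
try:
    validate_read_only("DROP TABLE customers")
except ValueError as e:
    print(e)
```

The same choke point is a natural place to attach the "Human-in-the-Loop" friction the abstract mentions: instead of raising, the validator could return a confirmation request for flagged statements.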
Kurtis Van Gent is an MCP Core Maintainer and leads the MCP Transports Working Group. By day, he leads AI Ecosystems + Integrations for Google Cloud Databases and helped create MCP Toolbox for Databases.
Wenxin Du is a core maintainer of MCP Toolbox for Databases. She delivered the end-to-end implementation of Toolbox's end-user authorization system and integrated semantic search functionality into Toolbox.
As MCP adoption grows, a challenge emerges: how do you expose hundreds of tools from a single server without overwhelming agent context windows? This talk introduces an MCP tool discovery mechanism we’ve built that dynamically loads tools. The platform combines a single discovery meta-tool exposed on initialization, server-side state management to track agent context, and streamed notifications/tools/list_changed messages that push relevant tool sets mid-session. Agents declare their problem context (incident response, monitoring, etc.) and receive only the tools they need, when they need them. Attendees will learn how this pattern keeps context windows lean while maintaining access to a broad tool ecosystem, with real examples showing how a single MCP server can serve diverse agent use cases without tool overload.
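The pattern can be sketched end to end in a few lines. Everything here is an assumption made for illustration (the tool names, the class, the stubbed notifier): the server exposes only a discovery meta-tool at first, records the context each session declares, and then signals the client to re-fetch its now-scoped tool list, standing in for a real notifications/tools/list_changed push.

```python
# Sketch of context-scoped tool discovery (all names invented). The notify
# callback stands in for sending notifications/tools/list_changed, after
# which a real client would re-request tools/list.

TOOLSETS = {
    "incident-response": ["page_oncall", "fetch_recent_deploys", "rollback"],
    "monitoring":        ["query_metrics", "list_alerts"],
}

class DiscoveryServer:
    def __init__(self, notify):
        self.notify = notify          # stub for the list_changed push
        self.session_context = {}     # session id -> declared problem context

    def declare_context(self, session: str, context: str) -> None:
        """The single discovery meta-tool exposed on initialization."""
        self.session_context[session] = context
        self.notify(session)          # tell the client its tool list changed

    def list_tools(self, session: str) -> list[str]:
        context = self.session_context.get(session)
        return ["declare_context"] + TOOLSETS.get(context, [])

pushed = []
server = DiscoveryServer(notify=pushed.append)
print(server.list_tools("s1"))        # only the meta-tool before declaration
server.declare_context("s1", "monitoring")
print(server.list_tools("s1"))        # now the monitoring set, nothing more
```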
Lilia Abaibourova is a product and engineering leader with 15 years of experience building and scaling developer platforms and AI-first tools at Amazon, Peloton, HBO, and Microsoft. At Amazon, she leads AI enablement for Prime Video engineers, delivering agentic assistants for design... Read More →
Before MCP, Arcade was building tools for LLM agents. We've shipped over 1,000 tools—first as native Arcade tools with our own protocol and eventually adopting MCP. The main lesson: the hard part isn't writing the code, it's finding the right abstraction.
Most MCP tools today are thin wrappers around APIs. `GET /users/{id}` becomes `get_user(id)`. But this creates a mismatch—LLMs reason about tasks ("find the customer who complained last week"), not endpoints. The question is: where should tools sit on the abstraction spectrum?
**Too low-level:** The agent needs to chain together many calls. Each step is a chance to fail, and the model has to maintain context across all of them. You're asking the LLM to be a programmer at runtime.
**Too high-level:** You end up enumerating every possible task as its own tool. This defeats the point of having a general-purpose agent and your tool schema balloons, eating context and degrading selection accuracy.
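The two extremes can be contrasted with a toy example. All the CRM-ish names and data below are hypothetical: with only low-level wrappers, the model must chain `list_complaints` and `get_customer` itself at runtime; a task-shaped tool does that chaining in ordinary code, where it is deterministic.

```python
# Hypothetical contrast: the same task at two abstraction levels.

COMPLAINTS = [
    {"customer_id": 7, "text": "still broken", "days_ago": 3},
    {"customer_id": 9, "text": "love it",      "days_ago": 20},
]
CUSTOMERS = {7: {"name": "Acme Co"}, 9: {"name": "Globex"}}

# Too low-level: the LLM must call these in sequence, filter in its head,
# and keep intermediate results in context.
def list_complaints() -> list[dict]:
    return COMPLAINTS

def get_customer(customer_id: int) -> dict:
    return CUSTOMERS[customer_id]

# Task-shaped: one call matching how the model reasons ("find the customer
# who complained last week"); the chaining happens in code, not at runtime.
def find_recent_complainants(within_days: int = 7) -> list[str]:
    recent = [c for c in list_complaints() if c["days_ago"] <= within_days]
    return [get_customer(c["customer_id"])["name"] for c in recent]

print(find_recent_complainants())
```

The `within_days` parameter is the hedge against the other extreme: one parameterized task tool, rather than a separate tool per conceivable time window.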
In this talk:
- The common pitfalls we see in MCP tool design
- Our design philosophy for optimized tools
- Multiple real-world use cases and the tools that work for them
- Outlook on future tool development
Sam is the CTO and co-founder of Arcade.dev. Before starting Arcade, Sam led the applied AI team at Redis responsible for the vector database offering. He is an avid OSS developer and has contributed to projects like Langchain, LlamaIndex, Chapel, DeterminedAI, and others... Read More →
Here's a scenario that might sound familiar: you've got ten MCP servers, which means ten client connections, ten auth flows, and ten different places where things can break. One reason teams end up in this mess is that each MCP server solves a real problem - so you add another one, and another, and suddenly you've got MCP sprawl.
Enter the MCP Gateway pattern.
In this talk, we'll walk through an architecture that aggregates multiple MCP backends behind a single unified interface. We'll cover the fun problems this creates - what happens when two backends expose tools with the same name? and show how declarative workflow composition lets you orchestrate multi-step operations across backends without writing custom wrapper code.
We'll demo a gateway unifying several backends and executing a workflow defined entirely in YAML. No magic, just patterns you can apply to your own infrastructure.
With this in mind, you'll leave with practical approaches to taming MCP sprawl while keeping your security policies consistent across the board.
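Both ideas from the demo can be sketched together. Every name below is invented for illustration, and the `WORKFLOW` dict stands in for what a YAML file would deserialize into: backend-prefixed tool names resolve the collision between the two `search` tools, and a generic step runner threads results between backends with no per-workflow wrapper code.

```python
# Sketch (all names invented): backend-prefixed tools to resolve name
# collisions, plus a declarative workflow executed step by step.

BACKENDS = {
    "tickets": {"search": lambda q: [{"id": 1, "title": q}]},
    "chat":    {"search": lambda q: [f"message about {q}"],  # same tool name!
                "post":   lambda text: f"posted: {text}"},
}

WORKFLOW = {  # what a `steps:` list in YAML would load as
    "steps": [
        {"tool": "tickets.search", "args": {"q": "login bug"}, "save_as": "hits"},
        {"tool": "chat.post", "args": {"text": "found {hits}"}},
    ]
}

def call(namespaced_tool: str, **args):
    backend, tool = namespaced_tool.split(".", 1)  # prefix disambiguates
    return BACKENDS[backend][tool](**args)

def run(workflow: dict) -> dict:
    state = {}
    for step in workflow["steps"]:
        # Substitute earlier results into string args, e.g. "{hits}".
        args = {k: v.format(**state) if isinstance(v, str) else v
                for k, v in step["args"].items()}
        result = call(step["tool"], **args)
        if "save_as" in step:
            state[step["save_as"]] = result
        state["last"] = result
    return state

print(run(WORKFLOW)["last"])
```

The brace-substitution here is deliberately crude; the point is that the workflow is data, so policy (allowlists, rate limits) can be applied uniformly at `call`.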
Juan Antonio "Ozz" Osorio is a Mexican software engineer living in Finland. His background spans security for OpenStack, Kubernetes, and bare metal environments. Currently at Stacklok, he founded the ToolHive project and has been building MCP infrastructure, including supply chain... Read More →
As AI agents become integral to cloud-native architectures, they need a standardized way to discover capabilities available within Kubernetes clusters. Currently, agents must be pre-configured with MCP server endpoints and skill definitions, creating brittleness in dynamic environments where services scale and evolve continuously. This talk introduces a Kubernetes-native discovery service: a cluster-scoped registry that exposes both MCP servers and Skills through a unified API. By leveraging Kubernetes primitives like CRDs and proven service discovery patterns, we can make agent capabilities first-class citizens in any cluster. Attendees will learn how to implement a dynamic registry enabling agents to query available MCP servers by capability, discover registered Skills with their metadata, and handle lifecycle changes gracefully. We'll demonstrate a working implementation showing agents dynamically assembling their toolset based on cluster state. The registry treats MCP servers and Skills as complementary discovery targets. Whether you're running agents in production or just exploring MCP adoption, this talk provides a blueprint for building discoverable agent infrastructure.
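The registry's query surface can be sketched in memory. This is an illustration only, with every name invented: a real implementation would back it with CRDs and a watch API, but the agent-facing contract is essentially capability-based lookup plus graceful handling of servers coming and going.

```python
# In-memory sketch of a capability registry (all names invented). A real
# version would be CRD-backed; this shows only the agent-facing contract.

class CapabilityRegistry:
    def __init__(self):
        self._servers = {}  # name -> {"endpoint": str, "capabilities": set}

    def register(self, name: str, endpoint: str, capabilities: set[str]) -> None:
        self._servers[name] = {"endpoint": endpoint,
                               "capabilities": set(capabilities)}

    def deregister(self, name: str) -> None:
        self._servers.pop(name, None)   # e.g. the backing pod scaled down

    def find(self, capability: str) -> list[str]:
        """Endpoints of every registered server offering the capability."""
        return [s["endpoint"] for s in self._servers.values()
                if capability in s["capabilities"]]

registry = CapabilityRegistry()
registry.register("metrics-mcp", "http://metrics.default.svc", {"query-metrics"})
registry.register("logs-mcp", "http://logs.default.svc",
                  {"search-logs", "query-metrics"})
print(registry.find("query-metrics"))   # both servers qualify
registry.deregister("logs-mcp")         # lifecycle change
print(registry.find("query-metrics"))   # agents re-assemble their toolset
```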
Senior Specialist Solutions Architect at AWS leading Container solutions in the Worldwide Application Modernization (AppMod) organization. He is experienced in distributed cloud application architecture, emerging technologies, open source, serverless, DevOps, Kubernetes, and GitOps. He is a CNCF Ambassador... Read More →