My Other Publications: Around the Web #36, Fail First, Win Later; Working in the Storm.
[MCP Rundown] 🤖🖥⚙
Why MCP / MCP Architecture
I covered MCP around the time it was released in late 2024, and at least once since. But it has blown up in the AI engineering community, and I wanted to learn about it more in-depth, so I decided it was worth dedicating a Pulse entry to it.
I started my learning by taking this MCP 101 course via Takeoff AI. As a Python developer, the only downside of the course for me was that the author writes the servers in TypeScript, but it is mostly a walk-through of the most important parts of the official documentation, so in that sense it saves you some time (also see the official code base). Another great resource is this (almost) two-hour workshop on building agents with the Model Context Protocol by Mahesh Murag of Anthropic.
If you want to ask questions of the documentation, give the content of this webpage: https://modelcontextprotocol.io/llms-full.txt (which is the text file of the official documentation), as context to your favorite LLM.
In brief, the Model Context Protocol (MCP) is an open standard that simplifies how LLMs access external tools and data. Just as USB-C offers a universal connector for devices, MCP provides a standardized way to link LLMs with local files, services, and remote APIs.
MCP supports building intelligent agents and workflows by defining three architectural components: Hosts (applications like Claude Desktop or an IDE that request data), Clients (which manage direct communication with servers), and Servers (lightweight programs that expose specific capabilities via MCP). Servers can securely access both local data, such as files or databases on your computer, and remote services like APIs.
The architecture is modular and follows a client-server model, enabling host applications to connect to multiple servers. MCP also comes with a growing set of pre-built integrations, allowing AI systems to interact with real-world information efficiently.
A key benefit of MCP is its vendor-agnostic design, letting developers switch between LLM providers while maintaining consistent workflows. By standardizing how context is provided to LLMs, MCP makes it easier to build secure, flexible, and intelligent AI-powered applications.
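To make the "one host, many servers" model concrete: in Claude Desktop, for example, the host is pointed at its servers through a single JSON config file (`claude_desktop_config.json`), where each entry names a command the client launches and talks to over stdio. A minimal sketch, using the official filesystem server plus a hypothetical local `weather_server` module for illustration:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    },
    "weather": {
      "command": "python",
      "args": ["-m", "weather_server"]
    }
  }
}
```

Swapping the underlying LLM provider does not touch this file, which is the vendor-agnostic point in practice.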
See example clients, and here is the repository of (some) servers (more below).
On top of this architecture, MCP defines three core primitives: Resources, Prompts, and Tools, each serving a distinct role in enhancing AI capabilities.
Resources are structured, read-only data elements—such as documents, files, or database records—that provide contextual information to LLMs. They are managed by the host application, ensuring that the AI has access to relevant information.
Prompts are reusable templates or workflows defined by servers and surfaced to clients. They guide LLM behavior by providing standardized instructions or task-specific guidance. Users can select and apply these prompts as needed, facilitating consistent and efficient interactions between users and models.
Tools are executable functions exposed by MCP servers, enabling LLMs to perform actions like querying APIs, executing computations, or interacting with external systems. These tools are designed to be model-controlled, allowing AI models to invoke them automatically, often with human oversight for approval. By supporting a wide range of functionalities, tools extend the capabilities of AI agents in real-world applications.
Collectively, these three form the foundational server primitives of MCP.
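Under the hood, clients and servers exchange JSON-RPC 2.0 messages, and each primitive maps to its own method family (resources/*, prompts/*, tools/*). A sketch of the request shapes, where the tool name, its arguments, and the resource URI are made up for illustration:

```python
import json

# Listing a server's tools: no params needed.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invoking a tool by name; "get_forecast" and its arguments are hypothetical.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_forecast", "arguments": {"city": "Berlin"}},
}

# Resources are addressed by URI rather than by name.
read_resource = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": {"uri": "file:///notes/todo.txt"},
}

print(json.dumps(call_tool, indent=2))
```

The host application decides when to surface resources and prompts, while tools are model-controlled: the LLM itself emits the `tools/call` request, typically with a human approving it.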
Building MCP Servers
If you are looking to build an MCP server with the Python SDK, you can start here and here. For more high-level orchestration, I believe FastMCP is the way to go.
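The FastMCP style is decorator-based: you instantiate a server and register plain functions as tools, and the SDK derives their schemas from type hints and docstrings. The real API lives in the SDK (e.g. `FastMCP` with `@mcp.tool()`); the toy registry below only imitates the registration-and-dispatch idea in dependency-free Python, so it runs anywhere:

```python
from typing import Any, Callable, Dict

class ToyMCP:
    """A stand-in for FastMCP's decorator pattern: register functions as
    named tools, then dispatch tools/call-style requests to them."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._tools: Dict[str, Callable[..., Any]] = {}

    def tool(self) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
        # Decorator factory: @mcp.tool() registers the function under its name.
        def register(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[fn.__name__] = fn
            return fn
        return register

    def call_tool(self, name: str, arguments: Dict[str, Any]) -> Any:
        # What the server does when a client sends a tools/call request.
        return self._tools[name](**arguments)

mcp = ToyMCP("demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

print(mcp.call_tool("add", {"a": 2, "b": 3}))  # prints 5
```

In the real SDK the server additionally generates a JSON schema for each tool's parameters and handles the transport (stdio or SSE) for you, which is exactly the boilerplate FastMCP exists to hide.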
There is also this very nice tutorial on building MCP servers with LLMs here: in brief, to build an MCP server with an LLM like Claude, first gather all the relevant documentation, such as the full protocol text (linked here) and the SDK README files, and share it with the model. Then clearly specify what your server should do: the resources it will expose, the tools it will provide, the prompts it will use, and any external systems it must connect to.
MCP Servers
During my research, I stumbled on these two platforms: MCP.so and Composio for MCP servers. From my understanding, MCP.so is a platform for discovering and sharing MCP servers. On the other hand, Composio is a developer platform for plug-and-play AI tool integration e.g., they have an SDK.
Some interesting servers I have tried, or that are on my list to try:
Exa Labs: Enables AI assistants like Claude to perform real-time web searches by connecting to the Exa AI Search API, providing up-to-date information retrieval capabilities. (GitHub)
HeyGen: Integrates HeyGen's avatar and video generation features into MCP-compatible clients, allowing AI applications to create personalized videos and avatars through the HeyGen API. (Docs)
Perplexity Ask: Connects AI assistants to Perplexity's Sonar API, facilitating live web-wide research and enabling real-time information access within the MCP ecosystem. (GitHub)
BeeMCP: Links Bee wearable data to AI assistants, allowing users to interact with their personal recordings, such as conversations and locations, through natural language queries. (GitHub)
Biomedical MCP Servers (PubMed, BioRxiv, ClinicalTrials.gov, DrugBank, OpenTargets):
Provide AI assistants with access to various biomedical databases, enabling tasks like searching for scientific articles, retrieving abstracts, and exploring clinical trial information. (GitHub)
Typefully: The Typefully MCP server enables AI assistants to create and schedule social media content by integrating Typefully's drafting and publishing tools through MCP. (Pipedream)
See this demo of "Claude, via MCP, taking full control of ChatGPT-4o to generate a full storyboard in Ghibli style": Link.
Also, see how to monetize MCP servers with Stripe.
[AI + Commentary] 📝🤖📰
[I]: 😋Agency is Eating the World
When I first used GPT-3 circa September 2022 (via the Playground), a few months before ChatGPT came out, it was clear to me that this was a technology that would fundamentally upend the state of affairs. One trait, agency, has since become a great marker of success, rivaling traditional skills and education in ways it has not in the past. Here is the best, most succinct essay I have read on the subject.
In his essay for The Industry, Gian Segato argues that Artificial Intelligence is triggering a fundamental economic shift where individual "agency"—the proactive will to act without explicit permission—is becoming more crucial than traditional skills or education. He notes the rise of hyper-lean companies achieving massive success with minimal staff, leveraging AI not to replace human ingenuity but to amplify it. Segato contends that while true agency involves defiance and improvisation, AI tools empower these driven individuals by drastically lowering the barrier previously imposed by the need for deep specialization. While expertise remains vital in high-risk sectors, AI makes advanced capabilities accessible across numerous fields, enabling generalists to build complex products and systems that once required large, specialized teams.
This trend, according to Segato, marks the "unraveling of credentialism," diminishing the value of formal qualifications in favor of a demonstrated bias toward action and achieving outcomes. Examples like Midjourney's high revenue-per-employee ratio and entrepreneurs pursuing solo billion-dollar ventures signal this structural change. While acknowledging the transition faces institutional resistance and potential operational chaos for solo founders, Segato emphasizes that agency is ultimately a state of mind. The core challenge, he suggests, is overcoming self-imposed limitations tied to old structures and believing in one's freedom to build and innovate in this new AI-augmented landscape.
[II]: 🪚On Jagged AGI: o3, Gemini 2.5 Pro, et al.
Ethan Mollick discusses the inherent difficulty in defining and measuring Artificial General Intelligence (AGI), especially with the advent of powerful new models like OpenAI's o3 and Google's Gemini 2.5 Pro. While these models exhibit impressive "agentic" capabilities—performing complex, multi-step tasks using tools and planning—prompting some observers like economist Tyler Cowen to suggest AGI is here, Mollick highlights their fundamentally uneven performance. He introduces his concept of the "Jagged Frontier," emphasizing that these AIs can achieve superhuman results on demanding tasks while simultaneously failing unexpectedly on simpler ones, making their competence powerful but inconsistent.
This "jaggedness" suggests we may have entered an era of "Jagged AGI," profoundly capable yet unreliable systems requiring careful human navigation, according to Mollick. He considers the implications for societal adoption: while major technologies typically integrate slowly, the independent nature of these agentic AIs might accelerate the process dramatically. Given the deep uncertainty about whether AI development will plateau, continue gradually, or take off rapidly, Mollick concludes that the most crucial task now is learning to adapt and work effectively within this unpredictable, jagged landscape of advanced AI capabilities.
[III]: ✌🏿️Vibe Coding Paradox
In his essay, Sangeet Paul Choudary introduces the "Vibe Coding Paradox," arguing that as AI dramatically lowers the cost and difficulty of execution (whether coding, writing, or designing), the value of that execution itself diminishes, which I suppose goes without saying. Drawing a parallel to Václav Havel navigating post-Soviet Europe's media explosion (where cheap expression made attention expensive), Choudary observes that AI enables faster production but often leads to a "productivity treadmill" where increased output lacks meaning and differentiation. When tasks that once conferred competitive advantage become easily replicable, simply doing more becomes counterproductive, drowning valuable signals in noise. The real scarcity, he argues, shifts from the ability to do things to discernment: knowing what is worth doing.
Choudary posits that in this new landscape shaped by AI-driven abundance, competitive advantage migrates to subtler qualities: meaningful restraint, careful craft, and developed taste. He cites examples like UNIQLO focusing on proprietary fabric science instead of chasing fast fashion cycles, and Shiseido investing in heritage and narrative when cosmetic formulations became commoditized. True defensibility, Choudary suggests, comes from encoding taste and restraint into coherent systems, as exemplified by Muji, whose minimalist aesthetic translates into operational efficiencies across sourcing, packaging, and merchandising. This system-level coherence, also seen in Studio Ghibli's insistence on craft over scale, creates lasting value and resilience that superficial, AI-generated outputs cannot replicate. He concludes that tinkering and strategic patience are now more valuable than simply blitzscaling execution.
[Screenshots] 📝🤖📰
🎙 Podcast on AI and GenAI
(Additional) podcast episodes I listened to over the past few weeks:
Please share this newsletter with your friends and network if you found it informative!