A practical view on agentic AI and why we think MCP is not solving a relevant problem.

Yes, in the current AI hype discourse a statement like this feels almost like professional suicide, but I want to briefly explain why we at HybridAI came to the conclusion not to set up or use an MCP server for now.

MCP servers implement the Model Context Protocol, a (still aspiring) standard developed and promoted by Anthropic that is gaining a lot of traction in the AI community.

An MCP server standardizes the tool calls (or “function calls”) that are so important for today’s “agentic” AI applications – specifically, the interface from the LLM’s tool call to the external service or tool, usually some REST API.

Generated with the current ChatGPT image engine – I love these trashy AI images a little and will miss them…

At HybridAI, we have long relied on a strong implementation of function calls. We can look back on a few dozen implemented and production-deployed function calls, used by over 450 AI agents. So, we have some experience in this field. We also use N8N for certain cases, which adds another relevant layer in practice. Our agents also expose APIs to the outside world, so we know the problem in both directions (i.e., we could both set up an MCP server for our agents and query other MCPs in our function calls).

So why don’t I think MCP servers are super cool?

Simple: they solve a problem that, in my opinion, barely exists and leave the two much more important problems of function calls and agentic setups unsolved.

First: why does the problem of standardizing foreign tool APIs hardly exist? Two reasons. (1) Existing tools usually already expose REST APIs or something similar, i.e. a standardized interface. These interfaces are remarkably stable – you can tell from API URLs that still carry “/v1/…” or “/v2/…” – and they stay accessible for a long time. Older APIs are often still relevant, like those of the ISS, the European Patent Office, or some city’s Open Data portal. These services won’t offer MCP interfaces anytime soon, so you’ll have to deal with the old APIs for a long time regardless. (2) And this surprises me a bit given the MCP hype: LLMs are actually pretty good at querying old APIs – better than any other system I’ve seen. You just throw the raw API output into the LLM and let it respond. No parsing, no error handling, no deciphering XML syntax. The LLM handles it reliably and fault-tolerantly. So why abstract that away with MCP?
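To make point (2) concrete, here is a minimal sketch (not our production code) of a plain function call that feeds a legacy REST API’s raw output straight back to the model. It assumes the official openai Node SDK, and the public ISS position endpoint is used purely as an illustrative stand-in:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// A legacy REST endpoint, used here only as an illustrative stand-in.
const ISS_URL = "http://api.open-notify.org/iss-now.json";

const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [{
  type: "function",
  function: {
    name: "get_iss_position",
    description: "Return the current position of the ISS.",
    parameters: { type: "object", properties: {} },
  },
}];

async function main() {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    { role: "user", content: "Where is the ISS right now?" },
  ];

  const first = await client.chat.completions.create({ model: "gpt-4.1", messages, tools });
  const msg = first.choices[0].message;

  if (msg.tool_calls?.length) {
    messages.push(msg);
    for (const call of msg.tool_calls) {
      // No parsing, no error mapping, no schema translation:
      // hand the raw payload straight back to the LLM and let it make sense of it.
      const raw = await (await fetch(ISS_URL)).text();
      messages.push({ role: "tool", tool_call_id: call.id, content: raw });
    }
    const final = await client.chat.completions.create({ model: "gpt-4.1", messages });
    console.log(final.choices[0].message.content);
  }
}

main();
```

The point of the sketch: the only “integration work” is one fetch call, and the model does the interpretation.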

In reality, MCP adds another tech layer to solve a problem that isn’t that big in daily tool-calling.

The bigger issues are:

–> Tool selection

–> Tool execution and code security

Tool selection: Agentic solutions work by allowing multiple tools, sometimes chained sequentially, with the LLM deciding which to use and how to combine them. This process can be steered with tool descriptions – small mini-prompts describing the functions and their arguments. But that gets messy fast. For example, we have a tool call for Perplexity for questions about current events (“what’s the weather today…”), but the LLM also calls it when the topic is merely a bit complex. Or it triggers the WordPress Search API although we actually wanted the GPT-4.1 web search. It’s messy, and it will only get more complex as autonomy increases.
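For illustration, here is roughly what that steering looks like in practice – a hypothetical pair of tool definitions (names and wording are invented for this post, not our actual production descriptions), where the description has to spell out explicitly when the model should not call the tool:

```typescript
import OpenAI from "openai";

// Hypothetical tool definitions: the description is the only lever for tool selection,
// so it has to state explicitly when NOT to call the tool, or the model over-triggers.
const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "perplexity_search",
      description:
        "Look up current events: news, weather, prices, anything 'today' or 'this week'. " +
        "Do NOT call this for questions that are merely complex or open-ended; answer those directly.",
      parameters: {
        type: "object",
        properties: { query: { type: "string", description: "Query about a current event." } },
        required: ["query"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "wordpress_search",
      description:
        "Search this website's own pages and posts. Only call this when the user asks about " +
        "content hosted on this site; never use it as a general web search.",
      parameters: {
        type: "object",
        properties: { query: { type: "string", description: "Search terms for the site search." } },
        required: ["query"],
      },
    },
  },
];

export default tools;
```

Even with carefully worded negative instructions like these, selection remains probabilistic – which is exactly the orchestration problem MCP does not touch.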

Tool execution: A huge issue for scaling and security is the actual execution of the tool code, which happens locally on your own system. Ideally, at HybridAI we would offer customers the ability to submit their own code, which would then be executed as a tool call whenever the LLM triggers it. But in terms of code integrity, platform stability, and security, that is a nightmare (anyone who has ever submitted a WordPress plugin knows what I mean). This issue will only grow with the spread of “operator” or “computer use” tools, since those also run locally, not at OpenAI.
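To sketch what a “Tool Execution Environment” could mean, here is a deliberately minimal, hypothetical example (Node, not our actual platform code) that runs submitted tool code in a separate process with a hard timeout and capped output. A real setup would add containerization, network isolation, and an allow-list on top:

```typescript
import { execFile } from "node:child_process";
import { mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical sketch of a minimal Tool Execution Environment:
// customer-submitted tool code runs in its own Node process, never in the platform process.
export function runCustomerTool(code: string, args: unknown): Promise<string> {
  const dir = mkdtempSync(join(tmpdir(), "tool-"));
  const file = join(dir, "tool.js");
  writeFileSync(file, code);

  return new Promise((resolve) => {
    execFile(
      process.execPath,                         // a separate Node interpreter
      [file, JSON.stringify(args)],
      { timeout: 5_000, maxBuffer: 64 * 1024 }, // 5 s wall clock, 64 KB of output
      (err, stdout, stderr) => {
        // Whatever happens in the child, the platform only ever sees a string result.
        resolve(err ? `tool error: ${String(stderr).slice(0, 500)}` : String(stdout));
      },
    );
  });
}
```

The isolation here is obviously not enough for hostile code – that is precisely why this problem deserves a standard more than the API-wrapping one does.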

For these two issues, I’d genuinely welcome ideas – maybe a TOP (Tool Orchestration Protocol) or a TEE (Tool Execution Environment). But hey.

Agentic Chatbot Controls Website

In the video, you can see how a HybridAI ChatBot steps out of its chat box and starts controlling elements on the embedding website.

While this is not yet “agentic” in the way many imagine, it is a very pragmatic step from an AI chatbot that only talks to one that can actually take action. The value of website chatbots in customer interactions increases significantly with such functionality.

The Rise of Action-Oriented Chatbots in 2025

(Why This Year Marks the Great Leap from Conversation to Execution)

Chatbot Evolution Timeline
1960s: ELIZA (rudimentary NLP)
1980s–2000s: Rule-based chatbots (scripts & IF/THEN)
2010s–2022: Deep-learning chatbots (transformers & NLP)
Future: Agentic systems (autonomous & action-oriented)

For decades, chatbots have been defined by their ability to converse. In the earliest days—dating back to the 1960s with ELIZA—they served mostly as novelty acts, reflecting user input through simple, scripted replies. Then came rule-based systems in the 1980s, followed by the deep-learning chatbots we rely on today. But 2025 is shaping up to be a watershed moment for chatbots: they are no longer just talking; they’re starting to take action on our behalf.


A Shift Beyond Conversation

Until recently, even the most advanced chatbots focused on interpreting user queries and offering relevant responses. Ask a chatbot what the weather is, and it gives you the forecast. Ask it for a recipe, and it might provide step-by-step instructions. These interactions improved dramatically thanks to deep learning and transformers, making conversation feel more natural. But fundamentally, they were still just “answer machines.”

Now, we’re witnessing the next evolution. Instead of limiting themselves to text-based chats, new-generation chatbots have the potential to perform tasks. Rather than just telling you the weather, they might turn on your smart heater. Rather than just suggesting a recipe, they could order your groceries from a partnering store. These systems are sometimes referred to as “agentic chatbots,” because they have the autonomy to act as an agent on your behalf.


Enter: HybridAI and Other Action-Oriented Systems

One prime example leading this charge is HybridAI. It’s designed to do more than talk: it can call specific API actions during a conversation and even manipulate elements on the hosting web page if a user requests it. Imagine you’re browsing a shopping site and you ask the chatbot to add a particular item to your cart or apply a promotional code. Instead of replying with a link or instructions, the chatbot can just do it for you. This is a substantial leap from a typical conversation-only assistant.
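On the front-end side, the mechanism can be as simple as the chat widget receiving a structured action from the agent backend and translating it into DOM operations. The following is a hypothetical sketch – the payload shape and selectors are invented for illustration, not HybridAI’s actual widget API:

```typescript
// Hypothetical action payload the embedded chat widget receives
// when the LLM decides to act on the page instead of just answering.
type PageAction =
  | { type: "add_to_cart"; productId: string }
  | { type: "apply_promo"; code: string };

function handlePageAction(action: PageAction): void {
  switch (action.type) {
    case "add_to_cart": {
      // Click the shop's existing add-to-cart button for the requested product.
      const button = document.querySelector<HTMLButtonElement>(
        `[data-product-id="${action.productId}"] .add-to-cart`,
      );
      button?.click();
      break;
    }
    case "apply_promo": {
      // Fill the promo-code field and submit its form.
      const input = document.querySelector<HTMLInputElement>("#promo-code");
      if (input) {
        input.value = action.code;
        input.form?.requestSubmit();
      }
      break;
    }
  }
}

// Example: the widget's message handler forwards actions coming from the agent backend.
window.addEventListener("message", (event) => {
  if (event.data?.source === "chat-widget-action") {
    handlePageAction(event.data.action as PageAction);
  }
});
```

The key design choice is that the chatbot never gets raw access to the page; it can only trigger a small, predefined set of actions the site owner has exposed.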

HybridAI’s capabilities highlight a crucial point: people want chatbots that actually solve problems, not just talk about them. We’re seeing the dawn of chatbots that can handle everyday tasks—everything from scheduling calendar events to navigating complex enterprise workflows—at the user’s command.


The Hype Around “Agentic Systems”

The term “agentic systems” is currently a hot topic. Experts, tech enthusiasts, and enterprise leaders alike are buzzing about how AI-driven assistants may soon become fully autonomous, capable of orchestrating multiple APIs, services, and even hardware devices in the background. While these discussions are exciting, the reality is that it will take time to refine and scale these capabilities. Questions around reliability, security, and ethics must be addressed before chatbots gain wide autonomy across critical domains.

Nonetheless, 2025 is shaping up to be the Year of Chatbot Action, the tipping point where the first wave of agentic systems begins to enter mainstream use. We’ll see more prototypes and pilot programs adopting these features, proving the concept and building trust with end-users. Like every transformative technology, it won’t happen overnight. But it’s closer than many realize—and it’s sure to reshape how we interact with both the digital and physical worlds.


Why This Matters

The impact of action-capable chatbots is enormous. Businesses will gain efficiency by reducing repetitive workflows; end-users will enjoy seamless convenience in everyday tasks. If you think about it, the shift from just “talking” to “doing” echoes the broader trend in AI: we want collaborative, proactive, and truly helpful systems.

We might still be a few years out from fully autonomous agentic systems, but the seeds are planted. Tools like HybridAI show us the immediate possibilities—chatbots can learn your needs, integrate with apps you use, and execute tasks in real time. In short, the future is already making its way into the present. And if 2025 is indeed the “Year of Chatbot Action,” imagine how much further they’ll go by the end of this decade.

Exciting times lie ahead.