A practical view on agentic AI and why we think MCP is not solving a relevant problem.

Yes, in the current AI hype discourse this statement almost feels like heresy, but I want to briefly explain why we at HybridAI came to the conclusion not to set up or use an MCP server for now.

MCP (Model Context Protocol) servers implement a standard developed and promoted by Anthropic – still more aspiration than established norm – that is currently gaining a lot of traction in the AI community.

An MCP server standardizes the tool calls (or “function calls”) that today’s “agentic” AI applications depend on – specifically, the interface between the LLM’s tool call and the external service or tool, usually some REST API.
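For readers who haven’t seen one: a function call is declared to the LLM as a small JSON schema describing the function and its arguments. A minimal, hypothetical example in the style used by current chat-completion APIs (the weather tool is illustrative, not one of ours):

```python
# A minimal, illustrative tool ("function call") definition in the JSON-schema
# style used by current LLM chat APIs. The weather example is hypothetical.
get_weather_tool = {
    "name": "get_weather",
    "description": "Fetch the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Cologne'"},
        },
        "required": ["city"],
    },
}
```

The model never executes anything itself – it only emits the name plus arguments, and your code does the actual call. That hand-off point is exactly what MCP standardizes.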

[Image: generated with the current ChatGPT image engine – I love these trashy AI images a little and will miss them…]

At HybridAI, we have long relied on a strong implementation of function calls. We can look back on a few dozen implemented and production-deployed function calls, used by over 450 AI agents. So, we have some experience in this field. We also use N8N for certain cases, which adds another relevant layer in practice. Our agents also expose APIs to the outside world, so we know the problem in both directions (i.e., we could both set up an MCP server for our agents and query other MCPs in our function calls).

So why don’t I think MCP servers are super cool?

Simple: they solve a problem that, in my opinion, barely exists and leave the two much more important problems of function calls and agentic setups unsolved.

First: why does the problem of standardizing foreign tool APIs hardly exist? Two reasons.

(1) Existing APIs and tools usually expose REST APIs or similar – they already use a standardized interface. And these are remarkably stable, which you can tell from API URLs still carrying “/v1/…” or “/v2/…”: they remain accessible for a long time. Older APIs are often still relevant – those of the ISS, the European Patent Office, or some city’s Open Data API. These services won’t offer MCP interfaces anytime soon, so you’ll be dealing with those old APIs for a long time anyway.

(2) And this surprises me a bit given the MCP hype: LLMs are actually pretty good at querying old APIs – better than any other system I’ve seen. You just throw the raw API output into the LLM and let it respond. No parsing, no error handling, no deciphering XML syntax. The LLM handles it reliably and fault-tolerantly. So why abstract that away with MCP?
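The “just throw the output into the LLM” pattern from point (2) fits in a few lines. A sketch, assuming the legacy endpoint returns some text payload – the URL and the `llm_complete` callable are placeholders, not real services:

```python
# Sketch of the "no parsing" pattern: fetch a legacy REST/XML API and hand the
# raw payload straight to the LLM instead of writing a parser.
# api_url and llm_complete are placeholders, not real services.
import urllib.request


def answer_from_legacy_api(question: str, api_url: str, llm_complete) -> str:
    raw = urllib.request.urlopen(api_url, timeout=10).read().decode("utf-8")
    # No XML/JSON parsing, no error mapping: the model reads the payload as-is.
    prompt = (
        f"API response:\n{raw}\n\n"
        f"Using only the data above, answer: {question}"
    )
    return llm_complete(prompt)
```

The fault tolerance is the point: if the API adds a field or returns slightly malformed XML, nothing breaks, because there is no parser to break.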

In reality, MCP adds another tech layer to solve a problem that isn’t that big in daily tool-calling.

The bigger issues are:

–> Tool selection

–> Tool execution and code security

Tool selection: Agentic solutions work by allowing multiple tools, sometimes chained sequentially, with the LLM deciding which to use and how to combine them. This process can be influenced with tool descriptions – small mini-prompts describing functions and arguments. But this can get messy fast. For example, we have a tool call for Perplexity when current events are involved (“what’s the weather today…”), but the LLM calls it even when the topic is just a bit complex. Or it triggers the WordPress Search API, though we wanted GPT-4.1 web search. It’s messy and will get more complex with increased autonomy.

Tool execution: A huge issue for scaling and security is the actual execution of tool code. This happens locally, on your own system. Ideally, at HybridAI, we’d offer customers the ability to submit their own code, which would then be executed as tool calls whenever the LLM triggers them. But in terms of code integrity, platform stability, and security, that’s a nightmare (anyone who has ever submitted a WordPress plugin knows what I mean). This issue will only grow with the spread of “operator” or “computer use” tools – those also run locally, not at OpenAI.
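One direction for the execution problem is process-level isolation. A minimal sketch in Python, assuming untrusted customer code arrives as a string – note this gives you only a hard timeout plus interpreter isolation, not a real sandbox (no filesystem or network restrictions):

```python
# Sketch of one mitigation for running customer-submitted tool code: execute
# it in a separate process with a hard timeout and no shell. This is a
# minimal illustration, NOT a full sandbox (no filesystem/network isolation).
import subprocess
import sys


def run_tool_untrusted(code: str, timeout_s: int = 5) -> str:
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env/site
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    if result.returncode != 0:
        return f"tool error: {result.stderr.strip()}"
    return result.stdout.strip()
```

A production-grade version would add containerization or a microVM plus resource limits – which is precisely the “Tool Execution Environment” gap described above.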

For these two issues, I’d like ideas – maybe a TOP (Tool Orchestration Protocol) or a TEE (Tool Execution Environment). But hey.

Agentic Chatbots in SaaS – How HybridAI Makes Your App Smarter

SaaS platforms have long included help widgets, onboarding tours, and support ticket systems. But what if your app had a conversational layer that not only explained features – but also triggered them?

With HybridAI, this is now possible. Our system enables you to create agentic chatbots that speak your domain language, understand user intent, and call backend functions directly via Function Calls and Website Actions.

From Support Widget to Smart Assistant

Traditional support widgets are passive: they answer FAQs or forward tickets. A HybridAI bot, however, can do things like:

  • Trigger onboarding steps (“Show me how to create a new project”)
  • Fetch user data (“What was my latest invoice?”)
  • Execute actions (“Cancel my subscription”)

All of this is powered by safe, declarative function calls that you define – so you stay in control.

How It Works

  1. Define Actions: You provide a list of available operations (e.g. getUser, updateRecord, createInvoice) and their input parameters.
  2. Connect via API or Function-Call Interface: HybridAI receives these as tools it can call from natural language.
  3. Bot Instructs + Responds: The chatbot interprets the user prompt, selects a matching function, fills in parameters, and calls it.
  4. Real-Time Feedback: The user receives immediate confirmation or result, without ever leaving the chat.
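The four steps above can be sketched as a tiny registry-and-dispatch loop. The `getUser` action mirrors the example from step 1; the dispatcher itself is a hypothetical illustration, not the HybridAI API:

```python
# Sketch of the declarative flow: the operator registers actions (step 1),
# the bot returns a {name, arguments} pair (steps 2-3), and a thin
# dispatcher validates and executes it (step 4). Illustrative only.
ACTIONS = {}


def register(name):
    def wrap(fn):
        ACTIONS[name] = fn
        return fn
    return wrap


@register("getUser")
def get_user(user_id: str) -> dict:
    # Stand-in for a real backend lookup.
    return {"id": user_id, "plan": "pro"}


def dispatch(call: dict):
    fn = ACTIONS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown action: {call['name']}")
    return fn(**call["arguments"])
```

Because the operator defines the registry, the bot can only ever invoke actions you explicitly allow – that is the “you stay in control” guarantee.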

Integration Benefits

  • No coding required to get started – just define what your functions do.
  • Frontend or backend integration via JS events or APIs
  • Custom styling + voice – the bot looks like part of your product
  • Multi-language and context-aware – excellent for international SaaS

Use Cases

  • CRM assistants that update leads or pull sales data
  • Analytics bots that explain dashboards or alerts
  • HR bots that automate time-off requests
  • Support bots that resolve issues without agents

Ready to Try?

You can test HybridAI’s function-calling capability today with our Quickstart Bot – no sign-up required.

And if you’re ready to bring this into production, reach out to us – we’ll help you integrate HybridAI into your stack in days, not months.

Real-life use at school

This week we tested HybridAI for the first time in a real school environment. The students of Stadt-Gymnasium Köln-Porz had the opportunity to spend a German lesson with us under the guidance of Sven Welbers – on the wonderful topic: Grammar!

What could possibly be better!

It was genuinely exciting: we configured HybridAI according to the teacher’s specifications to present a detective story that could only be solved step by step by completing grammar exercises. Since the stories were generated by the AI, each student had a unique version, with delightful variations each time a new story was generated.

Throughout the lesson, the bot provided feedback on progress and occasionally injected humorous messages.

Conclusion: The students certainly had a lot of fun! Not always guaranteed with such topics. The teacher was impressed by the educational quality of this lesson. Despite the dry material, the students appeared engaged and focused.

In the near future, we will develop further examples for the educational sector. The next session with a bot on the topic “Konjunktiv I and II” is already being prepared!

You can see the grammar bot in action here: