A practical view on agentic AI and why we think MCP is not solving a relevant problem.

Yes, in the current AI hype discourse this statement almost feels like suicide, but I want to briefly explain why we at HybridAI came to the conclusion not to set up or use an MCP server for now.

MCP servers implement a proposed standard (at this point still more aspiration than standard) developed and promoted by Anthropic, which is currently gaining a lot of traction in the AI community.

An MCP server is about standardizing the tool calls (or “function calls”) that are so important for today’s “agentic” AI applications – specifically, the interface from the LLM (tool call) to the external service or tool interface, usually some REST API.

Generated with the current ChatGPT image engine – I love these trashy AI images a little and will miss them…
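For context, this is roughly what a plain function-call (tool) definition looks like today – a sketch in the OpenAI tools style, which is the kind of interface MCP wants to standardize across providers. The tool name and handler here are made up for illustration:

```python
# Illustrative OpenAI-style tool definition; name and schema are examples.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    """Map a tool call emitted by the LLM to a local handler function."""
    handlers = {"get_weather": lambda args: f"Sunny in {args['city']}"}
    return handlers[tool_call["name"]](tool_call["arguments"])
```

The point: this already works without any extra protocol layer – the LLM emits the call, your code routes it.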

At HybridAI, we have long relied on a strong implementation of function calls. We can look back on a few dozen function calls implemented and deployed in production, used by over 450 AI agents – so we have some experience in this field. We also use N8N for certain cases, which adds another relevant layer in practice. Our agents also expose APIs to the outside world, so we know the problem in both directions (i.e., we could both set up an MCP server for our agents and query other MCPs from our function calls).

So why don’t I think MCP servers are super cool?

Simple: they solve a problem that, in my opinion, barely exists and leave the two much more important problems of function calls and agentic setups unsolved.

First: why does the problem of needing to standardize foreign tool APIs hardly exist? Two reasons.

(1) Existing APIs and tools usually expose REST APIs or similar – that is, they already use a standardized interface. And these are quite stable, which you can tell from API URLs still carrying “/v1/…” or “/v2/…”: they remain accessible for a long time. Older APIs are often still relevant – those of the ISS, the European Patent Office, or some city’s Open Data portal. These services won’t offer MCP interfaces anytime soon, so you’ll be dealing with those old APIs for a long time regardless.

(2) And this surprises me a bit given the MCP hype: LLMs are actually pretty good at querying old APIs – better than any other system I’ve seen. You just throw the raw API output into the LLM and let it respond. No parsing, no error handling, no deciphering XML syntax. The LLM handles it reliably and fault-tolerantly. So why abstract that away with MCP?
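Point (2) in practice: instead of parsing the response, you paste it into the prompt verbatim. A minimal sketch – the payload here is a hypothetical sample; in reality it would come straight from one of those old “/v1/…” endpoints:

```python
import json

def answer_with_raw_api_output(question: str, raw_response: str) -> str:
    """Build a prompt that hands the raw (even messy XML/JSON) API output
    straight to the LLM instead of parsing it ourselves."""
    return (
        "Answer the user's question using only this raw API response.\n"
        f"API response:\n{raw_response}\n\n"
        f"Question: {question}"
    )

# Hypothetical sample payload; in practice: requests.get(...).text
raw = json.dumps({"iss_position": {"latitude": "50.1", "longitude": "8.6"}})
prompt = answer_with_raw_api_output("Where is the ISS right now?", raw)
```

No schema negotiation, no client library – the model does the interpretation.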

In reality, MCP adds another tech layer to solve a problem that isn’t that big in daily tool-calling.

The bigger issues are:

–> Tool selection

–> Tool execution and code security

Tool selection: Agentic solutions work by allowing multiple tools, sometimes chained sequentially, with the LLM deciding which to use and how to combine them. This process can be influenced with tool descriptions – small mini-prompts describing functions and arguments. But this can get messy fast. For example, we have a tool call for Perplexity when current events are involved (“what’s the weather today…”), but the LLM calls it even when the topic is just a bit complex. Or it triggers the WordPress Search API, though we wanted GPT-4.1 web search. It’s messy and will get more complex with increased autonomy.
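One pragmatic mitigation we could imagine (not a full fix) is pre-filtering the tool list before the LLM even chooses, so that overlapping descriptions cause fewer wrong picks. A rough sketch with hypothetical tool names:

```python
# Hypothetical tool descriptions – the "mini-prompts" the LLM sees.
TOOLS = {
    "perplexity_search": "Use ONLY for questions about current events or "
                         "anything after the model's training cutoff.",
    "wordpress_search":  "Use ONLY to search articles on this WordPress site.",
}

def prefilter(question: str) -> list[str]:
    """Crude guardrail: narrow the tool list before the LLM picks one,
    so the 'a bit complex but not current' questions can't trigger
    the current-events tool by accident."""
    q = question.lower()
    if any(w in q for w in ("today", "yesterday", "current", "latest")):
        return ["perplexity_search"]
    return list(TOOLS)
```

It is a band-aid, not orchestration – which is exactly the gap a real tool-selection protocol would have to fill.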

Tool execution: A huge issue for scaling and security is the actual execution of tool code. This happens locally on your system. Ideally, at HybridAI, we’d offer customers the ability to submit their own code, which would then be executed as tool calls when the LLM triggers them. But in terms of code integrity, platform stability, and security, that’s a nightmare (anyone who has ever submitted a WordPress plugin knows what I mean). This issue will grow with increased use of “operator” or “computer use” tools – as those also run locally, not at OpenAI.
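To make the problem concrete, here is a minimal isolation sketch – customer-submitted code in a separate process with a hard timeout. This is explicitly not a real security boundary; a production “TEE” would need containers, seccomp, and resource limits on top:

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted_tool(code: str, timeout: float = 5.0) -> str:
    """Execute submitted tool code in a separate interpreter process with
    a hard timeout. A minimal isolation sketch, NOT a security boundary:
    the child still has filesystem and network access."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, no site dir
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout
    finally:
        os.unlink(path)
```

Everything the sketch leaves out – resource quotas, syscall filtering, secrets handling – is the actual hard part.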

For these two issues, I’d like ideas – maybe a TOP (Tool Orchestration Protocol) or a TEE (Tool Execution Environment). But hey.

The Beauty of Function-Calling

A key focus of HybridAI is “Function Calling”, which allows the AI to request other APIs and services in the background in specific situations and seamlessly integrate the output into the conversation. This can involve a wide range of tasks, such as querying weather services, stock market data, inventory systems, or even something as simple as retrieving energy prices for the next 24 hours—helping the user decide the most energy-efficient (and environmentally friendly) time to run their washing machine.
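The energy-price example boils down to a tiny function the LLM can call; a sketch (the price data would come from an energy-price API in practice):

```python
def cheapest_hour(prices_ct_per_kwh: list[float]) -> int:
    """Given 24 hourly prices in ct/kWh, return the hour (0-23) with the
    lowest price – the answer a 'when should I run the washing machine'
    function call would feed back into the conversation."""
    return min(range(len(prices_ct_per_kwh)),
               key=prices_ct_per_kwh.__getitem__)
```

The LLM calls it, gets back a single number, and phrases the recommendation itself.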

Or maybe you just want to know where the International Space Station is located at the moment…

As seen in the example with the Google Maps link, the AI can retain and reuse information in the ongoing conversation, even reformatting it when needed.
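The ISS example can be sketched the same way: the function call fetches the position and the bot turns it into a Google Maps link it can keep reusing. The payload shape here is assumed from the public Open Notify ISS API:

```python
def iss_maps_link(payload: dict) -> str:
    """Turn an Open Notify-style ISS position payload into a Google Maps
    link the chatbot can drop into the conversation. Payload shape
    assumed from api.open-notify.org/iss-now.json."""
    pos = payload["iss_position"]
    return f"https://www.google.com/maps?q={pos['latitude']},{pos['longitude']}"
```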

But of course, we wanted to take things one step further—because what would be the most logical function call for AI enthusiasts?
Naturally, the AI calling another AI!

This idea isn’t far-fetched at all, as there are now highly specialized AI systems, like Perplexity, which excels in retrieving real-time information.
So, let’s put it to the test—how many people protested against the AfD in Berlin yesterday?

Keep in mind, we’re currently chatting with GPT-4o mini—an OpenAI model whose training data is about two years old. This means it cannot know the answer to our question about the latest protests against the AfD in Berlin.

But that’s exactly why Function Calling is so powerful!

And of course, we can also ask about the current weather—for example, what’s it like right now in Auchenshuggle? 🚀

Cool, isn’t it?

If you have more ideas for function-calls or APIs that we should integrate, let us know. Or you just get your own HybridAI-Account for free here: https://hybridai.one/register

7 Things a Website Chatbot Should Be Able to Do in 2025

In the digital world of 2025, a website chatbot is no longer just a nice-to-have feature but an essential tool to improve customer experiences and streamline business processes. But what makes a truly great chatbot? Here are seven things a modern website chatbot must be able to do in 2025:

1. Provide Deeplinks to the Website

A chatbot should be able to independently crawl the website and extract relevant links. This allows it to respond directly to queries like “Where can I find the return policy?” or “Show me the latest offers” with appropriate deeplinks. This saves users time and simplifies website navigation significantly.

The chatbot generated this deeplink by crawling the website automatically

2. Utilize Website Functions with Function Calls

Modern chatbots must seamlessly interact with website features. For example, users should be able to check the status of an order or initiate a return directly within the chat. This is enabled by function calls, allowing the chatbot to access APIs and other technical interfaces of the website.

3. Address Users in Their Language Automatically

A good chatbot recognizes the user’s preferred language and adapts accordingly. Whether the user speaks German, English, or another language, the chatbot should effortlessly start the conversation in the correct language. This function significantly improves the user experience and makes the chatbot globally applicable.

The chatbot detects the user’s language from the browser without asking and responds accordingly
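Language detection can start as simply as reading the browser’s Accept-Language header; a minimal sketch that ignores q-values and just takes the first supported entry, falling back to English:

```python
def preferred_language(accept_language: str,
                       supported=("en", "de", "fr")) -> str:
    """Pick the chat language from the browser's Accept-Language header.
    Minimal parser: first supported entry wins (q-values ignored)."""
    for part in accept_language.split(","):
        code = part.split(";")[0].strip().lower()[:2]
        if code in supported:
            return code
    return "en"
```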

4. Allow for Human Takeover at Any Time

Even the best chatbot sometimes reaches its limits. In such situations, it is essential that users can easily switch to speaking with a human agent. Even better, the chatbot should facilitate this transition smoothly by passing on all relevant information to the agent. AI-powered human takeover options can further optimize this process.

Sometimes it is important that a human agent takes over from the AI – this system detects such a situation automatically and can call a human…
…who can then smartly pick up the conversation and calm the situation down
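A takeover trigger can be sketched as a simple rule – the keywords below are hypothetical, and a production system would more likely use an LLM classifier on the conversation:

```python
# Hypothetical escalation signals; a real system would learn these.
ESCALATION_HINTS = ("human", "agent", "complaint", "angry", "cancel my")

def needs_human_takeover(message: str, failed_turns: int) -> bool:
    """Decide whether to hand the chat to a human: either the user asks
    for one / sounds upset, or the bot has failed too often in a row."""
    text = message.lower()
    return failed_turns >= 3 or any(h in text for h in ESCALATION_HINTS)
```

The important part is what happens next: the handover must carry the full conversation context to the agent.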

5. Provide Information Based on Uploaded Materials

A truly versatile chatbot should be able to analyze uploaded materials such as product PDFs, price lists, or presentations and derive accurate information from them. This enables it to answer questions about technical specifications, pricing, or other details directly. This function is especially valuable in complex B2B scenarios.

Here you can see how deep a website chatbot’s responses can get if it has been trained with enough specific material (PDFs, other websites, etc.)
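Under the hood this is retrieval over chunks of the uploaded material. A naive word-overlap sketch – real systems would use embeddings, but the flow is the same: retrieve the most relevant chunk, then let the LLM answer from it:

```python
def top_chunk(question: str, chunks: list[str]) -> str:
    """Naive retrieval over chunks of uploaded material (PDF pages,
    price lists): return the chunk with the largest word overlap
    with the question."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))
```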

6. Multichannel Availability

Communication should take place where the user feels most comfortable. A modern chatbot is not only available on the website but also on channels like WhatsApp, Instagram, or Telegram – with the same functionality. This flexibility ensures that users can use the chatbot on their preferred platform without compromising performance.

In 2025, users expect to be able to communicate in the channel or tool they like most.

7. Configuration with Different LLM Models

As AI models continuously evolve, a chatbot should be configurable with various large language models (LLMs). This allows businesses to benefit from advancements in AI technology or, if needed, use European AI models to meet data protection requirements and regional regulations.

Different LLM models to choose from.
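Model configurability can be as simple as a per-agent lookup; the provider and model names below are examples only, not a statement about which backends any particular platform supports:

```python
# Hypothetical per-agent model configuration – names are examples only.
MODEL_CONFIGS = {
    "default": {"provider": "openai",  "model": "gpt-4o-mini"},
    "eu_only": {"provider": "mistral", "model": "mistral-large-latest"},
}

def pick_model(agent_settings: dict) -> dict:
    """Resolve which LLM backs an agent, e.g. forcing a European
    provider when data-protection rules require it."""
    key = "eu_only" if agent_settings.get("eu_data_residency") else "default"
    return MODEL_CONFIGS[key]
```

Keeping the choice in configuration means an agent can be moved to a newer or regionally compliant model without touching its prompts or tools.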

Conclusion

The demands on chatbots in 2025 are higher than ever. From smart navigation via deeplinks to multichannel functionality and the use of the latest AI technologies – a high-performing chatbot offers far more than just simple answers to standard questions. Companies that focus on these seven features can ensure they not only meet user expectations but are also future-proof.