Person working with a computer in dim light

10 December 2025

AI-powered apps: do you need to train a model?

Is your company planning to build an application based on artificial intelligence, but you’re not sure where to start or how much it will cost? We’ll explain it for you!


Artificial intelligence is, without a doubt, the most polarizing phenomenon of recent years. The arrival of ChatGPT, the popular OpenAI chatbot launched three years ago, opened the doors to a new digital era. Just think that in its first five days, ChatGPT reached over one million users.


It’s no surprise, then, that companies in every sector now see artificial intelligence as a turning point for their business. But what’s the best way to approach AI? And what solutions are available to those who want to use it?


You can’t determine in advance which approach is the right one. Each project must be evaluated individually to understand which system is most appropriate. Still, there are some general guidelines that can help you get a sense of your needs.


Understanding the need in order to find the right AI solution for each product

The excitement around artificial intelligence often leads many companies to develop digital products with AI just to keep up with trends.


Yet, before being entrepreneurs or developers, we are all users: how many of us have downloaded an app or used a platform just because an AI feature was added? Probably none of us.


For a product to work, with or without AI, it must first and foremost meet users' needs; the choice of technologies should follow from that.


Artificial intelligence isn’t always necessary

Often, when people talk about artificial intelligence, they are actually referring to a Large Language Model (LLM), a type of AI trained on huge amounts of data for natural language processing.


In practice, this means these models can understand the way we communicate daily, including what we express implicitly, and respond appropriately.


For example, when we say, “Could you pass me the salt?”, even though it is phrased as a question, what we are actually expressing is a request, not doubt. It would be strange if the person we ask replied, “Yes, I could,” instead of handing over the salt. LLMs, therefore, are trained to understand when they should (metaphorically, of course) “pass the salt.”


First and foremost, the golden rule is:

If natural language isn’t involved, you probably don’t need an LLM.

There are, in fact, many other alternatives that may be more suitable depending on the use case.


So, what are the possible scenarios?

  • When the problem can be solved with mathematics or logic, a classic algorithm is sufficient.
  • When structured data is involved, an LLM can be useful for interpreting the request, but a simple database query is often enough to extract the answer (see the sketch after this list).
  • If the goal is to build a proof of concept (POC) quickly, an LLM is helpful thanks to its versatility, but it will need to be optimized in later development stages.
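
To make the second scenario concrete, here is a minimal sketch of a question about structured data being answered with plain logic, no LLM involved. The data and field names are invented for illustration; in a real application this would usually be a database query rather than an in-memory list.

```python
# Minimal sketch: a question about structured data answered with plain logic.
# The data and field names are invented; in a real app this would typically
# be a database query instead of an in-memory list.

orders = [
    {"id": 1, "customer": "ACME", "status": "open", "total": 120.0},
    {"id": 2, "customer": "Globex", "status": "shipped", "total": 80.0},
    {"id": 3, "customer": "ACME", "status": "open", "total": 45.5},
]

def open_orders_total(data):
    """Sum the value of all orders that are still open."""
    return sum(o["total"] for o in data if o["status"] == "open")

print(f"Open orders: {sum(o['status'] == 'open' for o in orders)}")
print(f"Value of open orders: {open_orders_total(orders)}")
```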


For us at Mabiloft, the priority is not to use the trendiest technologies, but those that fit the context—technologies that address the core need of the application with minimal cost and maximum efficiency.


If you’re unsure whether AI is the right path for your product, talk to us. Our team of experts can guide you without preconceptions or ready-made answers to find the solution that fits your specific case.



What problems does AI solve?

So why is everyone so obsessed with artificial intelligence? Today, AI is the best solution for addressing some of the recurring challenges companies face.


Some examples of optimal AI solutions for businesses include:

  • Creating chatbots for customer support, available 24/7, able to follow company policies and respond quickly.
  • Accurately retrieving answers from data such as company manuals, audio exchanged between colleagues, and documents from various sources.
  • Automating workflows, especially those based on the classification of items like tickets or emails (a minimal sketch of this pattern follows the list).
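
As an illustration of the last point, here is a minimal sketch of a classification-driven workflow. The categories and the `call_llm` helper are our own placeholders rather than any specific provider's API; the idea is simply that the model picks a label and traditional code does the routing.

```python
# Minimal sketch of a classification-driven workflow (e.g. sorting emails).
# `call_llm` is a hypothetical placeholder for whichever LLM provider you use;
# it is assumed to return plain text.

CATEGORIES = ["collaboration_request", "support", "newsletter", "other"]

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its text answer."""
    raise NotImplementedError("wire this to your LLM provider")

def classify_email(subject: str, body: str) -> str:
    prompt = (
        "Classify the following email into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ". Reply with the category name only.\n\n"
        + f"Subject: {subject}\n\n{body}"
    )
    answer = call_llm(prompt).strip().lower().replace(" ", "_")
    # Fall back to 'other' if the model replies with something unexpected.
    return answer if answer in CATEGORIES else "other"

def route_email(subject: str, body: str) -> None:
    category = classify_email(subject, body)
    if category == "collaboration_request":
        print("-> forward to the business team")
    elif category == "support":
        print("-> open a support ticket")
    else:
        print("-> archive in the general inbox")
```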


Thanks to these capabilities, artificial intelligence is proving to be the innovation of the decade, if not of the coming century. But to truly benefit from it, AI should not be treated as a decorative addition to applications that work fine without it; instead, it should be used to solve specific problems.


Do you want to explore real AI use cases in apps and find out how (and if) to integrate it into your next digital product? Read our related article on the blog!


Do you need to train a model to build an AI-powered app?

A common misconception that many clients bring to us at Mabiloft is that it is necessary or desirable to “train a model” to create an AI app. The truth is that this is rarely the best solution.


To integrate artificial intelligence, you don’t need to “create a brain from scratch.” To explain it to our clients, we use this analogy:

  • Training a model from scratch is like training a doctor for twenty years. It's a long and costly process, necessary when you want to build the foundation of knowledge from zero. This is essentially the process followed by Google or OpenAI to build their LLMs, but fortunately we can use models already trained by others.
  • Fine-tuning, that is, training an existing model on specific datasets, is like sending a doctor to a specialization course. It's useful, but only if the model needs to learn a particular style or jargon.
  • Finally, RAG (Retrieval Augmented Generation) is like handing the doctor an up-to-date medical record with all the relevant test results while they examine a patient. It improves the answers provided by the LLM by giving it external, domain-specific knowledge, such as company documents, internal procedures, or predefined FAQs (a minimal sketch of this retrieval step follows the list).
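
To make the retrieval idea concrete, here is a minimal, deliberately simplified sketch of RAG. A real system would use vector embeddings and a vector store; here a toy word-overlap score stands in for retrieval, and `call_llm` is a placeholder for whichever LLM provider you use.

```python
# Minimal RAG sketch. Real systems use vector embeddings and a vector store;
# here a toy keyword-overlap score stands in for the retrieval step, and
# `call_llm` is a hypothetical placeholder for your LLM provider.

DOCUMENTS = [
    "Refunds are processed within 14 days of the return request.",
    "Support is available Monday to Friday, 9:00 to 18:00.",
    "Orders above 100 euros ship for free.",
]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def score(question: str, document: str) -> int:
    """Toy relevance score: shared words (a real system uses embeddings)."""
    return len(set(question.lower().split()) & set(document.lower().split()))

def answer_with_rag(question: str, top_k: int = 2) -> str:
    # 1. Retrieve the documents most relevant to this question.
    relevant = sorted(DOCUMENTS, key=lambda d: score(question, d), reverse=True)[:top_k]
    # 2. Augment the prompt with that external knowledge.
    context = "\n".join(f"- {d}" for d in relevant)
    prompt = (
        "Answer using only the context below. If the answer is not there, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # 3. Generate the answer with the LLM.
    return call_llm(prompt)
```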


Artificial intelligence is an extremely powerful tool, but only when used appropriately. Otherwise, it becomes a waste of time and money, without even guaranteeing a successful project outcome.


How to choose between fine-tuning and RAG

In practice, full model training is rarely used, and for non-experts, choosing between the other two options can be confusing. So, when should you use one or the other?


The key question is: "Do I want the AI to write like me, or do I want it to know what I know?" In the first case, fine-tuning is needed to imitate a style. This is usually recommended in later stages, when more data is available to define the desired output.


In the second, more common case, simply providing specific knowledge is enough. RAG is therefore the most suitable process—and also the less costly of the two.
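
If fine-tuning really is the route, most of the work is preparing example conversations that demonstrate the desired style. As a rough illustration, here is what a tiny training dataset might look like in the chat-style JSONL format that several providers (OpenAI among them) accept; the exact format depends on the provider, and the content below is invented.

```python
# Sketch of a tiny fine-tuning dataset in a chat-style JSONL format: one JSON
# object per line, each containing an example conversation in the desired
# style. Content is invented for illustration; check your provider's docs
# for the exact format they expect.

import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer in our brand's friendly, concise tone."},
            {"role": "user", "content": "Can I change my delivery address?"},
            {"role": "assistant", "content": "Of course! Send us the new address and we'll sort it out right away."},
        ]
    },
    # ...in practice, dozens or hundreds of examples like this one.
]

with open("finetuning_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```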


Approach and challenges in building an AI app: notes from our experience

At Mabiloft, we have frequently worked with AI. For example, we developed a knowledge base that allows uploading files, audio, and messages related to company projects, and interacting with a chatbot that not only provides precise answers about the project but even cites the exact sources from which the information was drawn.


But that’s not all: another product we are proud of is capable of classifying incoming emails using machine learning. This way, we automated the email sorting process, distinguishing collaboration requests from general informational messages.


And these are just a few (the ones we can reveal!) of the AI-related projects we have developed, but stay tuned for updates on our future products.


What we learned from working with AI

What lessons have we taken away from the AI projects we have experimented with? First and foremost, the key is managing the output. In our experience, a high-quality LLM response, and the system that produces it, should meet these criteria:

  • Show sources: The risk of hallucinations is always present, so it’s best to show the user the sources of the responses. Have you noticed that Google’s AI Overview does the same? This allows users to verify for themselves whether the AI’s summary can be trusted.
  • Manage errors: Some questions may have no answer. In such cases, it's better to provide an honest admission of uncertainty rather than fabricated information. Therefore, it's important to allow for "no answer" responses and, instead, offer related topics as alternatives (see the sketch after this list).
  • Use specialized agents: This means we prefer agents specialized in each sub-task rather than a single agent attempting every role. For example, one agent might interpret user intent, another perform research, and a third aggregate data and produce a response for the user.
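
As a sketch of the first two points, here is one way an answer can carry its sources and an explicit "no answer" state instead of a fabricated reply. The field names are our own invention for illustration.

```python
# Minimal sketch of how an answer can carry its sources and an explicit
# "no answer" state instead of a fabricated reply. Field names are invented
# for illustration.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Answer:
    text: Optional[str]              # None means "we honestly don't know"
    sources: list = field(default_factory=list)
    related_topics: list = field(default_factory=list)

def render(answer: Answer) -> str:
    if answer.text is None:
        topics = ", ".join(answer.related_topics) or "none"
        return f"I couldn't find an answer to that. Related topics: {topics}"
    cited = "\n".join(f"  [{i + 1}] {s}" for i, s in enumerate(answer.sources))
    return f"{answer.text}\n\nSources:\n{cited}"

print(render(Answer(text="Refunds take up to 14 days.",
                    sources=["returns_policy.pdf, page 3"])))
print(render(Answer(text=None,
                    related_topics=["shipping times", "return labels"])))
```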


On the technical side, alongside our established stack, we use services that put privacy first. How is this achieved? The models we use do not reuse either user input or LLM output to train public models.


The main challenges, however, are token costs and response latency. These remain limitations of artificial intelligence today. Our solution is to use AI only where necessary: if certain steps can be performed with heuristics—i.e., traditional logic—we prefer this approach to filter requests and reduce the load on the model.
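
As an illustration, here is a minimal sketch of that pre-filtering idea: cheap traditional rules handle the obvious cases, and only what remains pays the token cost of a model call. The rules and the `call_llm` placeholder are invented for the example.

```python
# Minimal sketch of a heuristic pre-filter: cheap traditional logic handles
# the obvious cases, and only the remaining requests reach the (paid, slower)
# LLM. The rules and the `call_llm` placeholder are invented for illustration.

GREETINGS = {"hi", "hello", "hey", "thanks", "thank you"}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def handle_message(message: str) -> str:
    text = message.strip().lower()

    # Heuristic 1: trivial messages never need a model call.
    if text in GREETINGS:
        return "Hello! How can I help you?"

    # Heuristic 2: ask for more detail if the input is too short to be a real question.
    if len(text.split()) < 3:
        return "Could you give me a bit more detail?"

    # Everything else goes to the actual LLM.
    return call_llm(message)
```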


Are you thinking about developing a digital product that requires artificial intelligence? If our work has piqued your interest, or if you think you need help developing your idea, don't hesitate to contact us, with no obligation and at no cost. We will arrange an introductory call to share ideas and see whether we can help you.