
17 March 2026
User Research: Questions, Methods, and Pitfalls to Avoid
Does user research feel like a huge effort with little return? Don’t worry: with the right approach, it’s easy to find the answers you’re looking for.
Have you ever tried a digital product with obvious usability issues and wondered how no one thought to fix them?
Sometimes the answer is simple: it’s hard to judge your own product and truly see what it needs. That’s why the best judges are the users themselves—the people who use the product every day and know both its strengths and its flaws.
When your app is losing users and you don’t know why, you have two options: guess, or talk to the people who actually use it. But that’s not easy—scattered data from anonymous reviews or spontaneous feedback doesn’t always help you understand where to act.
That’s why it’s essential to build a structured analysis aimed at understanding which choices are working and, more importantly, what needs improvement.
This is where user research comes in: involving real users to gain insights that guide product direction and priorities. But what’s the best way to get the answers you need?
At Mabiloft, we’ve conducted user research for both our clients and our internal products, and we’ve developed a method that makes user research:
- Fast
- Effective
- Data-driven
Here are some of our tips to optimize time while maximizing results.
User Research: Start with Questions, Not Answers
It may sound obvious, but it’s often overlooked: to get the answers you need, you first have to ask the right questions. What does that mean? If you question users without knowing what you’re trying to learn, you’ll inevitably fail to get meaningful answers.
Example. You run interviews with 30 users. The UX researcher asks a long list of questions and takes notes on every response. At the end, the data is aggregated—and the research stops there, with no action taken.
Research is only useful when it helps you make decisions. It’s not just about collecting data; it should lead to concrete actions, clarifying points such as:
- Why users abandon the application
- Why a specific flow is unused or dropped midway
- Which not-yet-developed feature is most appealing to users
Only when the answers clearly indicate the next steps can user research be considered successful.
Avoiding Bias in User Research
One of the biggest challenges when talking to users is unintentionally influencing their answers—or failing to approach them with an open mind. Teams know their product and often form assumptions about what isn’t working, but these don’t always match user perceptions.
Starting with preconceived ideas can easily lead to confirmation bias: focusing on information that supports your theory. For example, you might only consider reviews that highlight issues you already believe exist.
Example. Out of the last 100 app reviews, 2 users are excited about a “coming soon” feature. Meanwhile, 20 reviews complain about an existing feature that the team considers less central. Confirmation bias may push the team to prioritize the new feature instead of improving the one that’s already causing problems.
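A simple way to keep counts, rather than assumptions, in charge is to tag each review with the issue it mentions and tally the tags. The sketch below is a minimal illustration of the example above; the tag names and counts are made up for demonstration.

```python
from collections import Counter

# Hypothetical tagged reviews: each of the last 100 reviews has been
# labeled with the feature or issue it mentions (tags are illustrative).
review_tags = (
    ["existing_feature_bug"] * 20   # complaints about the existing feature
    + ["coming_soon_feature"] * 2   # excitement about the unreleased feature
    + ["other"] * 78                # everything else
)

# Let the raw counts drive prioritization, not prior beliefs.
priorities = Counter(review_tags).most_common()
for tag, count in priorities:
    print(f"{tag}: {count} of {len(review_tags)} reviews")
```

Seen this way, the 20 complaints clearly outweigh the 2 enthusiastic mentions, whatever the team's initial hypothesis was.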
Another way bias creeps in is through the questions themselves. Leading open-ended questions—or worse, multiple-choice questions with limited options—can steer users toward predefined answers.
Example. Asking “How frustrating is the onboarding process?” suggests that the process is frustrating, even if the user didn’t perceive it that way. A more neutral way to ask would be: “How would you evaluate the onboarding process? Were there any unclear steps?”
If you’re unsure whether your questions might unintentionally guide users, we can help you refine them. Book a call with us and we’ll review your questions together.
Our Method: Start from Data
So far we’ve covered what to avoid—now let’s look at where to begin. For us, the answer is simple: any solid research starts with empirical evidence, meaning the data you already have.
When working with a digital product, this data often includes:
- Support tickets
- App store reviews
- Heatmaps and session replays
- Funnel analysis and drop-offs
- Event tracking and user queries
Having this foundation allows us to validate assumptions, identify weak points, and understand which flows are used—and which are ignored—so we can act more precisely.
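As a concrete illustration of funnel analysis, the sketch below computes step-to-step drop-off from event counts. The step names and numbers are invented; in practice they would come from your analytics tool.

```python
# Hypothetical funnel: (step name, number of users who reached it).
funnel = [
    ("opened_app", 10_000),
    ("started_onboarding", 7_200),
    ("completed_onboarding", 4_100),
    ("reached_paywall", 3_800),
    ("subscribed", 400),
]

# Fraction of users lost between each consecutive pair of steps.
drop_offs = {
    (step, next_step): 1 - next_count / count
    for (step, count), (next_step, next_count) in zip(funnel, funnel[1:])
}

for (step, next_step), drop in drop_offs.items():
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

The step with the largest drop-off (here, the paywall) is the "what"; interviews and surveys then go after the "why".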
It may seem counterintuitive: why gather new data through user research if we already have data? In reality, user research exists to expand and complement what we already know.
Example. Users engage with the product but drop off at the paywall. Data tells us what is happening, but only interviews and surveys can explain why: is it the price, lack of interest in premium features, or technical issues?
With a solid data base, we can:
- Formulate better questions
- Move beyond naive assumptions
- Focus on the “why,” once the “what” and “how” are clear
Interviews or Surveys: How to Choose
Even though this article covers user research broadly, there’s an important distinction we haven’t addressed yet: interviews versus surveys.
Both methods have their pros and cons, so there’s no one-size-fits-all answer.
Interviews allow direct interaction with users and the ability to shape questions based on previous answers. This provides more room to uncover insights that may not have been recognized as relevant during preparation.
However, interviews require more time and a certain level of skill from the UX researcher to keep the conversation focused on the desired points.
Surveys, on the other hand, provide a clear guide to the topics you want to cover. But for this very reason, they can limit the depth of user responses.
The time required to administer a survey is much shorter, which makes it easier to reach a larger audience. However, this can sometimes result in less reliable answers: responses given casually or without genuine engagement.
Ultimately, the choice of method should depend on which approach is more suitable for the type of insights you hope to gain and the resources you have available.
How To Structure User Research
Once you’ve defined your method, avoided common pitfalls, and identified your starting point, let’s take a closer look at what happens during user research. The two fundamental components for successful research are the participants and the questions asked.
Finding the Right Participants
Who should be interviewed in user research? The easiest way is to look where your users already are—for example, communities dedicated to your app.
At this stage, you simply need to contact community admins and ask them to post a message. The message itself should be carefully crafted: maintain a human tone that doesn’t make it look like spam, but rather a genuine request to help improve the product.
If participation is particularly demanding (e.g., a 30-minute call), it can be helpful to offer a small incentive, such as a discount on the next subscription.
For apps with a smaller audience and no large communities, any user could potentially be suitable for an interview.
However, it’s important to profile participants: have they been using the app for a long time? Do they use all its features? These questions are essential because user feedback will naturally be influenced by their characteristics.
Finally, a common mistake to avoid is interviewing friends or acquaintances. While they may be the easiest to reach and convince, their opinions are rarely unbiased.
Asking the Right Questions
Questions are the other critical element in user research. As mentioned, it’s important to come prepared with the questions you want answers to. But you can’t simply ask: “So, what should the next feature be?”
Effective questions share these characteristics:
- They focus on friction points, not opinions. Asking “Do you like…?” is rarely useful.
- They come in connected sequences: a single question rarely uncovers the core of an issue, and jumping between topics prevents a complete understanding.
- They don’t steer the user toward expected answers, but leave room to explore what matters most to them.
Sometimes, especially with online surveys, user responses may appear random. How can you ensure decisions are not based on careless answers?
The best method is to include control questions—one or two questions that mimic an earlier response but are phrased slightly differently. If the answers are inconsistent, the survey can be considered unreliable and discarded.
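The control-question filter described above can be applied mechanically to a survey export. The sketch below is one possible implementation; the question IDs and the agreement threshold are assumptions to adapt to your survey tool.

```python
def is_reliable(response: dict, pairs: list) -> bool:
    """Keep a response only if every control pair agrees
    within one point on a 1-5 scale (threshold is an assumption)."""
    return all(abs(response[a] - response[b]) <= 1 for a, b in pairs)

# Hypothetical control pair: the same onboarding question, rephrased.
control_pairs = [("q3_onboarding_clear", "q9_onboarding_clear_rephrased")]

responses = [
    {"q3_onboarding_clear": 4, "q9_onboarding_clear_rephrased": 4},  # consistent
    {"q3_onboarding_clear": 5, "q9_onboarding_clear_rephrased": 1},  # discard
]

kept = [r for r in responses if is_reliable(r, control_pairs)]
print(f"kept {len(kept)} of {len(responses)} responses")
```

Discarded responses shouldn’t feed into decisions, but a high discard rate is itself a signal: the survey may be too long or poorly targeted.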
One last tip: the question that unlocks the most valuable insights is best asked toward the end of the interview, once the user is comfortable: “If you had a magic wand, what would you change immediately?”
This question lets users express their true priorities, highlighting what matters most to them, and often yields the most relevant information you can gather.
The Ideal Outcome: A Case Study
What should we aim to achieve with user research? Concretely, the desired outputs are:
- 5–8 new product insights
- At least 3 clear decisions on what to do and what not to do
- A backlog of at least 10 tasks with assigned priorities
- Clarity on the next sprint or the upcoming 1–2 weeks of work
More broadly, the ideal outcome is one that enables us to make a decision that can change the course of our application.
This is exactly what happened with one of our clients, who approached us to understand why a core AI-driven feature in their app wasn’t converting as expected.
At Mabiloft, we conducted user interviews alongside them, which revealed a surprising yet enlightening truth: for users, the AI wasn’t a benefit—it actually created distrust.
The result? By following the insights from user research, the client revised the flow, reducing the prominence of AI in the process. This led to new user acquisition and higher satisfaction among existing users.
If you’re considering testing your application by engaging directly with your users, contact us for a consultation. We’ll guide you on what to test, how to recruit the right participants, and how to avoid common mistakes.
Reach out with no obligation—we can help you structure a 7-day roadmap, from defining questions to delivering actionable outcomes.
