Ultimate Guide to Mobile App Usability Testing
Usability testing is essential for creating mobile apps that are easy to use and meet user expectations, boosting retention and satisfaction.
Essential Designs Team | June 13, 2025

Want to build a mobile app users love? Start with usability testing.
Usability testing ensures your app is easy to navigate, enjoyable to use, and meets user expectations. Why does this matter?
- 80% of users delete apps that fail to meet expectations.
- 67% uninstall apps due to unclear navigation.
- Apps with better usability see higher retention rates and save money by catching issues early.
Here’s what you’ll learn in this guide:
- What usability testing is and why it’s critical for app success.
- Key steps: setting goals, finding the right users, and testing on real devices.
- Testing methods: moderated, unmoderated, remote, and in-person.
- Tools and strategies to analyze results and improve your app.
Bottom line: Usability testing isn’t optional - it’s a must for creating apps that stand out in today’s crowded market. Start testing early, involve your team, and continuously refine based on user feedback.
Mobile Usability Testing Best Practices
Planning Your Usability Test
Careful preparation is the backbone of effective usability testing. Without a clear plan, even the most detailed tests can overlook valuable insights. To get it right, you need to clarify your objectives, identify the right participants, and design a suitable testing environment. Start by outlining specific goals that will guide every stage of the process.
Setting Clear Testing Goals
The first step is to define what you want to achieve. Are you looking to pinpoint issues in your checkout process, assess how easily new users complete their first task, or measure the impact of recent design updates? Whatever the case, setting clear goals keeps your testing focused and productive.
For example, Shopify used usability testing to gain insights into freelancer hiring habits, which directly influenced their product decisions. By establishing specific objectives early, you can decide what to test, which tasks to assign, and the best methods for collecting feedback. Don’t forget to define success criteria and allocate a budget to ensure your efforts stay on track. Once your goals are set, the next step is to recruit participants who truly represent your target audience.
Identifying Your Target Users
To get meaningful results, you need participants who reflect your actual or potential customer base. The right participants can make all the difference, so it’s crucial to choose carefully.
"To identify target users for a usability test, first define who your product or service is for. Consider factors like age, location, interests, and tech skills. Choose participants who match these characteristics to ensure your test results accurately reflect your actual users' experiences and needs."
– Ishita Garg, UI/UX Designer
Start by analyzing your current user data to understand their behaviors, preferences, and needs. Use this information to create user personas that represent different audience segments. Screening questions are another key tool for refining your participant pool. They help confirm that participants meet your criteria and weed out those who might skew the results.
For example, in a test for an educational app, screening questions identified parents of elementary school children. These questions covered topics like the number and ages of their kids, the educational tools they used, and their involvement in their children’s learning. This approach ensured the test focused on a relevant group.
"Screening participants is like auditioning talent for a show. Use surveys, questionnaires, or interviews to find your stars. Make sure they fit your criteria and have backup participants – because sometimes stars can be divas and not show up."
– Omran Khleifat, UX Leader
As you recruit, keep ethical considerations in mind. Ensure participants give informed consent and understand how their data will be used. In the U.S., privacy regulations require transparency about data storage and usage.
Choosing Devices and Platforms
Your usability tests should reflect how your users interact with your product in real life. That means selecting devices that match your audience's actual preferences. Focus on what your users are using - not just the latest gadgets.
Market research can help pinpoint which devices are most popular. In the U.S., Apple holds 29% of the smartphone market, followed by Samsung at 25.4% and Xiaomi at 11.6%. As of 2023, Android 13 was the most used Android version at 24.35%, while on Apple devices iOS 17 dominated with 66% of usage.
Testing across a range of devices is critical. Include different operating systems, screen sizes, and resolutions. Simulate various connectivity scenarios, from strong Wi-Fi to weak mobile networks, and even offline modes. The number of devices you test will depend on your company’s size and resources:
| Company Type | Device Count | Key Android Devices | Key iOS Devices |
| --- | --- | --- | --- |
| Startups | 10–15 | Samsung Galaxy S23, Google Pixel 8/7, Samsung Galaxy S20/21, Samsung Galaxy A50/A51/A52 | iPhone 15, iPhone SE (2nd/3rd gen), iPhone X/XS, iPhone 6S/7/8 |
| SMB | 15–20 | All startup devices plus Samsung Galaxy S23/S22 Ultra, Samsung Note 20/Note 20 Ultra, Google Pixel 8/7 Pro | All startup devices plus iPhone 15/14/13 Pro Max, iPhone 13/12 mini, iPad Mini/Pro |
| Enterprise | 25+ | All SMB devices plus Samsung Galaxy Z Fold5, Samsung Galaxy Z Flip5, Xiaomi 14 Ultra, Huawei P60/P60 Pro, OPPO Find X5 | All SMB devices plus additional iPhone/iPad models of different sizes |
Collaborate with your marketing team to identify user personas and usage patterns. Tools like Google Analytics or Glassbox can provide valuable insights into the devices your audience prefers.
When testing, ensure participants use devices they are comfortable with. Aim to test with 3–4 users per device for more accurate results. Keep in mind that market trends shift quickly, especially in the U.S. mobile market. Regularly update your device list to stay aligned with user preferences. By testing a diverse set of devices, you’ll gather insights that mirror real-world usage, making your findings more reliable.
Usability Testing Methods
Once your testing plan is ready, the next step is choosing the best way to gather user feedback. Different methods have unique strengths depending on your goals, timeline, and budget. Below, we'll break down some common approaches and their trade-offs to help you identify the most effective way to pinpoint usability issues in your mobile app.
Common Testing Methods
There are several ways to conduct usability testing for mobile apps, each offering distinct insights. A key difference lies in moderated versus unmoderated testing and remote versus in-person approaches.
Moderated testing involves a facilitator guiding participants through tasks and asking follow-up questions. This method provides rich feedback since moderators can dig deeper into user behavior and clarify responses. However, it requires more time and coordination to execute.
Unmoderated testing allows users to complete tasks independently, following pre-written instructions and sharing feedback through surveys or recorded sessions. This method is budget-friendly and enables users to interact with the app in their usual environment, although it may lack the depth of moderated testing.
Remote testing lets participants use your app from their own location - whether at home, work, or elsewhere. This approach captures authentic usage patterns in familiar settings, provided participants have a compatible device and internet access.
In-person lab testing takes place in a controlled environment where researchers can closely observe user interactions. This method collects detailed data, including body language and immediate user reactions, but the setting may feel less natural than real-world usage.
Specialized methods can focus on particular aspects of usability:
- Eye-tracking tests use devices to monitor where users focus their attention on the screen, revealing patterns in visual hierarchy and design.
- Screen recording tests track users' navigation through tasks, offering a detailed view of their journey.
- Card sorting helps optimize navigation by asking users to group app features into logical categories.
- Guerilla testing is a quick and low-cost option where researchers approach people in public spaces like cafes or malls, asking them to complete short tasks in exchange for a small reward. While fast, this method may lack depth and could introduce bias since participants may not represent your target audience.
Comparing Testing Methods
Each method comes with its own balance of cost, speed, depth, and practicality. Here's a quick comparison to help you decide:
| Method | Best For | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Moderated Remote | Complex tasks, detailed feedback | In-depth insights, flexible participant pool | Requires scheduling; potential tech issues |
| Unmoderated Remote | Quick feedback, natural usage | Scalable, time-efficient, authentic behavior | Limited follow-up; less detailed insights |
| In-Person Lab | Early prototypes, complex observations | Rich data in a controlled setting | Costly; artificial environment |
| Guerilla Testing | Early feedback, tight budgets | Fast results, minimal planning | Limited depth; participant bias |
| Eye-Tracking | Visual design optimization | Precise attention data, objective insights | Requires specialized equipment |
Your choice will depend on your app's development stage and specific questions. For example, in-person testing works well for early prototypes, where moderators can explain incomplete features. On the other hand, unmoderated remote testing is ideal for established apps, offering insight into natural usage patterns.
Cost is another factor. Remote testing is generally cheaper than lab-based studies, as it avoids expenses like facility rentals and travel. However, these savings may come at the cost of some control over the testing environment and data richness.
Timelines also matter. Unmoderated remote testing can be launched within days and gather feedback from multiple users simultaneously. In contrast, in-person studies often require weeks of planning and execution.
According to research from the Nielsen Norman Group, testing just five participants in one cycle can uncover 85% of usability issues.
Usability Testing Tools
The right tools can turn user interactions into actionable insights, especially for complex mobile app projects. Here's a look at some key tools that can enhance your testing process:
- Screen recording and session replay tools: These capture user interactions, including touch gestures, scrolling, and time spent on each screen. Advanced tools can even detect factors like device orientation or network issues.
- Remote testing platforms: These manage the entire testing process, from recruiting participants to analyzing results. Features like live session monitoring and automated feedback analysis streamline the workflow.
- Analytics integration: By linking usability testing data with app metrics, you can see how usability issues impact user behavior. For example, you can compare confusing checkout flows identified in testing with actual cart abandonment rates.
- Device testing capabilities: These ensure your app is tested across various operating systems, screen sizes, and network conditions, reflecting the diversity of your audience.
- Collaboration features: Tools that allow team members to review sessions, add comments, and track issue resolution make it easier to turn insights into product improvements. Security features like data encryption and user consent management are also critical when handling sensitive information.
Often, the best results come from using a combination of tools. For instance, screen recording tools can provide detailed interaction data, while survey platforms collect structured feedback, and analytics tools track long-term impacts. This multi-tool approach ensures you get a well-rounded view of your app's usability, helping you make data-driven design decisions.
Running Your Usability Test
Once you've chosen your methods and tools, it's time to conduct your usability test. This step involves careful planning around participant recruitment, session management, and data collection to ensure you gain insights that can refine your mobile app.
Finding the Right Participants
Start by recruiting participants who match your target user profile. For example, if you're testing a banking app, look for users aged 25–55 who actively use mobile banking on iOS or Android at least twice a week.
Keep your criteria simple to avoid overcomplicating recruitment. Too many restrictions can make finding participants expensive and time-consuming. For instance, if you're testing a fitness app's onboarding process, focus on users new to fitness apps rather than narrowing it down to specific workout routines or gym memberships.
Create a short screener survey (fewer than 10 questions) to identify suitable participants. Avoid questions that might reveal the purpose of your study, as this could bias their responses. Instead of directly asking, "How often do you use food delivery apps?" consider broader questions about their general mobile app habits. Include open-ended questions to gauge their ability to articulate thoughts clearly, which is essential for collecting useful feedback.
"Specific, actionable, and practical research questions are most effective." - Erika Hall
Decide on your sample size based on your research goals. For qualitative testing aimed at finding usability issues, 3–5 participants are usually enough to uncover about 80% of the problems. If you're conducting a quantitative study to measure metrics, you'll need a larger group to ensure statistical reliability.
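The sample-size figures above follow the well-known problem-discovery model popularized by the Nielsen Norman Group: the share of problems found by n participants is 1 − (1 − p)^n, where p is the probability that a single participant encounters a given problem (commonly cited as roughly 0.31). A quick sketch, assuming that value of p:

```python
def discovery_rate(n_participants: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n participants,
    per the 1 - (1 - p)^n discovery model. p is the assumed probability
    that any single participant hits a given problem."""
    return 1 - (1 - p) ** n_participants

# Diminishing returns become visible quickly:
for n in range(1, 8):
    print(f"{n} participants -> {discovery_rate(n):.0%} of problems found")
```

With p = 0.31, five participants land at roughly 84%, which is why small qualitative rounds repeated across iterations tend to beat one large study.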
Offer appropriate incentives to thank participants for their time. For a 60-minute session, consider a $75–100 gift card, though professional or niche audiences might require higher compensation. Use digital scheduling tools to organize sessions and confirm details via email. Send a follow-up 24–48 hours before each session to minimize no-shows, which could disrupt your timeline and budget.
Once you've confirmed participants, shift your focus to running effective test sessions.
Managing Test Sessions
A well-structured session is key to gathering unbiased, actionable feedback. Start by preparing a test script that outlines your goals, tasks, and team roles. This ensures consistency across all sessions.
Set a welcoming tone by explaining the session's purpose at the beginning. Let participants know they're testing the app - not being tested themselves - and encourage them to think aloud as they navigate. This helps reduce anxiety and encourages natural behavior.
Pay attention to both on-screen actions and non-verbal cues, like facial expressions or body language, which can indicate confusion or frustration. For example, a participant might complete a task but show visible signs of difficulty, offering valuable insight.
When participants ask questions during the test, use the boomerang technique to redirect the question back to them. If someone asks, "Do I click here to submit my order?" respond with, "What do you think?" or "What would you do next?" This approach reveals their natural thought process without introducing bias.
Stay adaptable if participants deviate from your planned tasks. Unexpected behaviors often highlight real-world usage patterns that might not align with your assumptions. Instead of steering them back to your script, observe and learn from their choices.
Document key actions and direct quotes during the session, noting timestamps for easy reference later. Avoid explaining design choices or defending the app, as this could influence their feedback.
"The golden rule is to stop when you no longer learn anything new." - Matthieu Dixte, Product Researcher at Maze
For unmoderated tests, keep instructions simple and tasks straightforward since participants won't have real-time guidance. Limit these sessions to around 15 minutes to maintain engagement, as users are more likely to drop out without a moderator present.
Collecting Test Data
As your sessions progress, focus on systematically gathering data to capture the most important insights. Usability tests can produce a lot of information, so it's essential to stay organized and aligned with your research questions. Combine quantitative metrics with qualitative feedback for a well-rounded understanding of the user experience.
Track quantitative data like task completion rates, time spent, and error rates alongside qualitative observations such as user comments and signs of frustration. These metrics provide benchmarks for improvement and help compare different design iterations. Pay attention to specific actions, like the number of taps, paths users take, and points where they abandon tasks.
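The quantitative metrics described above are simple ratios over your session records. A minimal sketch, using a hypothetical session log (participant IDs, task names, and numbers are illustrative):

```python
# Hypothetical session log: one record per participant per task.
sessions = [
    {"participant": "P1", "task": "checkout", "completed": True,  "seconds": 42, "errors": 0},
    {"participant": "P2", "task": "checkout", "completed": True,  "seconds": 67, "errors": 2},
    {"participant": "P3", "task": "checkout", "completed": False, "seconds": 95, "errors": 4},
]

n = len(sessions)
completion_rate = sum(s["completed"] for s in sessions) / n  # share of successful attempts
avg_time = sum(s["seconds"] for s in sessions) / n           # mean time on task
avg_errors = sum(s["errors"] for s in sessions) / n          # mean error count per attempt

print(f"Completion: {completion_rate:.0%}, avg time: {avg_time:.0f}s, avg errors: {avg_errors:.1f}")
```

Computing the same metrics for each design iteration gives you the benchmark comparisons mentioned above.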
"Analyze with confidence by collecting relevant data, critically assessing it, and forming testable explanations." - Maria Rosala, NN/g
Use tools like Excel or Airtable to organize your data digitally. Categorize issues - such as navigation challenges, content clarity, or visual design concerns - to identify patterns across participants.
Evaluate the relevance of feedback as you collect it. Not all comments are equally useful, so prioritize input that aligns closely with your target user profile and research goals. Weigh feedback accordingly rather than treating all responses the same.
To track issues effectively, create a matrix with participants in columns and problems in rows. This format makes it easy to see which issues are recurring versus isolated. For instance, if three out of five participants struggle with the same checkout button, that's a clear usability issue that needs immediate attention.
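The participant-by-problem matrix can be built directly from session notes. A small illustrative sketch (participant IDs and problem labels are hypothetical):

```python
from collections import defaultdict

# Hypothetical observations: (participant, problem) pairs taken from session notes.
observations = [
    ("P1", "checkout button hard to find"),
    ("P2", "checkout button hard to find"),
    ("P3", "checkout button hard to find"),
    ("P2", "password rules unclear"),
]
participants = ["P1", "P2", "P3", "P4", "P5"]

# Map each problem to the set of participants who hit it.
matrix = defaultdict(set)
for participant, problem in observations:
    matrix[problem].add(participant)

# One row per problem, most frequent first; recurring issues stand out immediately.
for problem, hit in sorted(matrix.items(), key=lambda kv: -len(kv[1])):
    row = " ".join("x" if p in hit else "." for p in participants)
    print(f"{row}  {len(hit)}/{len(participants)}  {problem}")
```

A problem marked by three of five participants rises straight to the top, while a single isolated mark stays low in the list.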
"Research results need to be organized, synthesized, and analyzed, preferably in a partnership or group to broaden the perspective." - Taylor Palmer, Product Design Lead at Range
Don't overlook insights from pre-task conversations and post-task discussions. These moments often reveal user expectations and mental models that can inform future design decisions.
Analyzing Results and Making Improvements
Turning raw data into actionable steps is key to improving your mobile app's user experience. This phase is all about transforming observations into meaningful changes that make your app more intuitive and user-friendly.
Analyzing Your Test Data
Start by revisiting your original testing goals to ensure you’re focusing on the feedback that matters most. Use tools like Excel or Airtable to group findings into categories such as navigation, content clarity, visual design, and technical issues. This organization helps streamline the analysis process.
Pay attention to recurring patterns in the data. If several participants encounter the same issue, it’s likely a genuine usability problem rather than an isolated incident. For example, if three out of five users struggle to locate the checkout button, that’s a major design flaw that needs immediate attention.
Analyze both quantitative data (like task completion rates and time spent on tasks) and qualitative data (such as user comments and non-verbal cues). This dual approach helps you identify and rank issues more effectively. Break the process into clear steps: gather the data, verify its accuracy, interpret the findings, and ensure they align with your original research objectives.
To prioritize effectively, categorize issues by severity. A structured severity scale can guide your team on what to tackle first:
| Severity Level | Description | Example |
| --- | --- | --- |
| 4 - Critical | Prevents users from completing tasks or disrupts the experience | No confirmation after payment or inability to sign up |
| 3 - Serious | Slows down user progress significantly | Navigation flow breaks or password reset fails |
| 2 - Medium | Frustrating but not task-blocking | Excessive scrolling or small text on key pages |
| 1 - Low | Minor, cosmetic issues | Typos in headings or outdated logos |
| 0 - No issue | False alarms or feature requests | N/A |
Once issues are categorized, the next step is prioritizing and addressing them.
Prioritizing and Fixing Issues
After sorting your findings, focus on fixing the problems that have the greatest impact on the user experience. Not all issues are created equal, and with limited resources, it’s crucial to prioritize wisely.
Start by addressing problems that block essential tasks. For instance, navigation issues are critical since 67% of users uninstall apps due to unclear navigation or insufficient information. Tools like the RICE framework (Reach, Impact, Confidence, Effort) or a Value vs. Effort Matrix can help you determine which fixes will deliver the most benefit with the least effort.
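The RICE framework mentioned above reduces to a single formula: score = (Reach × Impact × Confidence) / Effort. A hedged sketch with hypothetical reach and effort numbers:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE prioritization score: (Reach * Impact * Confidence) / Effort.
    Reach: users affected per period; Impact: typically a 0.25-3 scale;
    Confidence: 0-1; Effort: estimated person-weeks."""
    return reach * impact * confidence / effort

# Hypothetical findings from one test round; all numbers are illustrative.
issues = [
    ("unclear checkout button", rice_score(reach=5000, impact=3, confidence=0.9, effort=2)),
    ("small text on settings",  rice_score(reach=1200, impact=0.5, confidence=0.8, effort=1)),
]
for name, score in sorted(issues, key=lambda x: -x[1]):
    print(f"{score:>7.0f}  {name}")
```

The highest-scoring issues are the ones delivering the most user impact per unit of effort, which is exactly the trade-off the paragraph above describes.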
"I've seen the most success with usability testing when designers, engineers, and product managers are involved in understanding how people actually use or don't use a product. You want people who have the ability to make decisions to be involved, so you can make sure that all of your research is building a case for the important decisions that you want to make." - Behzod Sirjani, Founder of Yet Another Studio
Define clear prioritization criteria, focusing on user impact, risk, and alignment with business goals. Consider using incremental delivery methods like feature flagging or staged rollouts. This approach allows you to fix urgent issues quickly while planning broader design changes for future updates.
Track improvements using short-term metrics (e.g., task success rates, user satisfaction) alongside long-term indicators like retention and lifetime value. Balancing these metrics ensures you’re addressing immediate concerns while keeping an eye on the bigger picture. Also, take the opportunity to tackle technical and design debt to prevent recurring usability challenges.
Once fixes are implemented, document everything for future reference.
Documenting Your Findings
Good documentation is the backbone of effective development. It consolidates insights and ensures that everyone on your team is aligned. Create reports that highlight high-priority issues, complete with evidence and proposed solutions.
Be precise. Don’t just say "users struggled with navigation." Instead, specify which elements caused confusion and why. Pinpoint the design flaws or user flow issues and propose actionable fixes, like repositioning a button or clarifying ambiguous labels.
Your documentation should include:
- A summary of usability test goals and methodology
- Participant demographics
- Key findings, organized by severity
- Specific problem areas with context and evidence
- Proposed solutions for each issue
Avoid lengthy, overly technical reports. Keep your documentation concise and actionable so it’s easy for busy development teams to digest. Group similar issues together and rank them by severity to help your team focus on what matters most.
Clear documentation does more than just guide immediate fixes - it builds institutional knowledge. Companies like Essential Designs, known for custom mobile apps and UI/UX design, rely on detailed usability documentation to deliver user-centered solutions across industries like healthcare, technology, and finance. These insights inform every stage of development, from wireframing to deployment.
Collaboration doesn’t end with testing. Keep the conversation going with your team to ensure everyone stays aligned when analyzing results and planning improvements.
Building Usability Testing into Development
The best mobile apps don’t leave usability testing as an afterthought - they weave it into every stage of development. By catching issues early and often, teams can refine the user experience at each step. This approach aligns perfectly with agile development, which thrives on continuous feedback and improvement. Testing throughout each sprint allows teams to validate design choices in real-time, fix problems quickly, and maintain high-quality standards. Importantly, usability testing isn’t a solo endeavor. It’s a collaborative process that involves business analysts, developers, and testers working together.
Usability Testing in Agile Development
Here’s how agile teams successfully integrate usability testing into their workflows:
- Start testing with wireframes and prototypes. You don’t need a fully developed app to begin usability testing. Early evaluations of prototypes or mockups can catch design flaws before full-scale development, saving time and effort.
- Dedicate sprint time for focused testing. Skipping usability tests during sprints can lead to discovering critical issues too late in the process.
- Act on feedback immediately. Debrief after each test session to identify what’s not working. Quick analysis and actionable changes ensure the feedback directly improves the product.
- Leverage sprint retrospectives for usability insights. Use these regular reviews to discuss testing results and keep the user experience at the forefront of development.
- Standardize testing processes. Create templates for test notes, plans, and usability stories to streamline testing across sprints.
"We did tons of user research to determine what were the models, the messages, et cetera that we were going to go live with, but it was all about the testing and iterative approach that followed that got us to where we are today. So there is a moment where you almost have to take a leap of faith and have a very solid testing plan post-launch that takes you to where you ultimately want to be, which can look very different than what you originally thought."
– Ryan Daly Gallardo, SVP of Consumer Products at Dow Jones (Wall Street Journal)
Frequent usability testing pays off. Nearly 90% of users abandon apps due to poor performance, and 88% of consumers avoid websites with bad user experiences. With over 6.92 billion smartphone users worldwide in 2023, even small usability problems can have a massive impact. This agile approach ensures that testing and development work hand-in-hand, fostering collaboration and improving usability.
Working with Development Partners
Once agile testing routines are in place, teaming up with experienced development partners can further enhance usability. These partnerships bring specialized expertise to ensure user-centric design throughout the process.
- Choose partners who prioritize user needs. Seek teams that go beyond building what’s specified - they should focus on delivering what users genuinely need. For example, Essential Designs offers end-to-end services, from planning and wireframing to design, coding, testing, and deployment, ensuring usability is a priority at every stage. This approach is especially valuable in industries like healthcare, technology, and finance, where user experience is critical.
- Ensure flexibility in testing schedules. Agile UX testing happens alongside design and development sprints. Your partner should adapt quickly to feedback and keep improvements on track.
- Look for sector-specific expertise. Different industries have unique demands. Healthcare apps must meet accessibility standards, financial apps need clear security features, and real estate apps should simplify complex data for mobile users.
- Maintain open communication. Developers, designers, and product managers must collaborate closely, using test results to guide decisions. Your development partner should actively participate in testing sessions and retrospectives to ensure feedback is effectively implemented.
- Plan for iterative improvements. Agile development thrives on incremental changes. A good partner will embrace usability testing feedback as an opportunity for refinement, not as scope creep.
"Balance confidence with humility: build a prototype, ship V1, then iterate based on feedback."
– Tomer London, Co-Founder at Gusto
Ultimately, the success of these partnerships is measured by how efficiently usability feedback is addressed. Regular testing during sprints not only enhances user satisfaction but also reduces costly fixes after launch.
Conclusion
Testing the usability of mobile apps is critical to keeping users engaged. With over 6.5 billion smartphone users globally and a staggering 2.8 million apps available on the Google Play Store alone, simply offering good functionality isn't enough. Your app must provide an experience that feels intuitive, responsive, and genuinely helpful.
Improving usability can lead to impressive results. For instance, every dollar spent on UX can yield a $100 return. These numbers highlight the tangible benefits of focusing on user experience rather than relying on assumptions.
"Mobile usability testing is the key to the success of any app." - LiveSession
The data makes one thing clear: testing should be an ongoing process. Apps that adapt based on user feedback often see up to a 30% boost in engagement after updates, and users are 80% more likely to stick with apps that evolve to meet their needs. Start testing early, refine during every sprint, and keep responding to feedback even after launch.
Collaborating with seasoned teams like Essential Designs can simplify the testing process and help you avoid costly post-launch fixes. As user research expert Jared M. Spool points out:
"User research takes more time upfront than guessing but saves time in the long run"
This advice is especially relevant considering that developers spend up to 50% of their time on preventable fixes.
High abandonment rates serve as a reminder: even the best features fall flat if users can't access them easily. The goal isn't to achieve perfection from day one but to create a foundation for continuous improvement, guided by user feedback.
FAQs
What’s the difference between moderated and unmoderated usability testing, and when should you use each?
Moderated vs. Unmoderated Usability Testing
Moderated usability testing involves having a facilitator work directly with participants as they navigate tasks in real time. This setup offers the chance to gather instant feedback, ask follow-up questions, and dig deeper into complex user interactions or emotional reactions. It's especially useful for exploring intricate features or gaining a detailed understanding of how users behave in specific scenarios.
Unmoderated usability testing, on the other hand, takes a more hands-off approach. Participants complete tasks on their own without a facilitator present. This method is quicker, allows for scaling up, and is perfect for collecting quantitative data from a larger, more varied group. It’s ideal for testing simpler user flows or gathering early-stage feedback.
Use moderated testing when you need to thoroughly examine user experiences, and opt for unmoderated testing when speed and broad input from many users are your priorities.
How do I recruit participants who accurately represent my target audience for mobile app usability testing?
To find participants who genuinely represent your target audience, begin by setting specific criteria that align with your users' demographics, behaviors, and preferences. Craft screener questions to filter potential candidates and confirm they match your ideal profile.
Use a mix of recruitment strategies - like social media campaigns, email invitations, and professional recruitment services - to tap into a broad and diverse pool of participants. Be diligent in screening to ensure relevance and reduce bias. Rotating participants regularly can also bring in fresh viewpoints. By following these steps, you’ll gather usability testing insights that are both reliable and practical.
What are the biggest challenges in mobile app usability testing, and how can they be solved to ensure accurate results?
Common Challenges in Mobile App Usability Testing
Testing a mobile app for usability can be tricky. Some of the most frequent hurdles include participant bias, poorly crafted tasks, and technical hiccups during the testing process. If these issues aren't handled properly, the results can end up being skewed or incomplete.
So, how do you tackle these problems? Start by setting clear goals for your testing. Knowing exactly what you want to learn will keep the process focused. Next, choose participants who closely resemble your app's target users. This way, their feedback is more relevant and less likely to be influenced by bias.
Make sure the tasks you assign are realistic and well thought out. Clear instructions help participants navigate the app naturally, giving you better insights. Also, test your app across a variety of devices to catch any technical bugs early on. And remember, planning is everything - taking the time to prepare thoroughly rather than rushing through will lead to more reliable and useful findings.