Redesigning the way from A to B
Moovit is a popular public transit navigation app used by hundreds of millions of people worldwide, including in Israel.
One of its core screens is the itinerary, which guides riders through their planned or ongoing routes. While we always felt the design wasn’t fully optimized, classic data analysis and user interviews didn’t surface clear pain points, and shadowing users to capture their real-time thoughts proved challenging.
To validate our intuition, we set out to gather insights through alternative methods, using our findings to shape a better, more user-friendly design.
Foreword
The itinerary screen is a core component of the trip-planning flow, providing users with a complete overview of their journey from A to B—including all stops, lines, and transfers. Given its importance, it was critical for us to assess how well it performed.
We had long suspected that the screen was not fully optimized, but traditional data analysis and user interviews didn’t yield clear, objective answers to key questions: Is the screen intuitive? Does the information structure make sense? Do users notice the essential details they need while navigating their trip?
Additionally, we knew that upcoming plans to introduce new features and transit types would require a more scalable design—something the existing structure couldn’t easily accommodate.
Faced with this challenge, we set out to validate our intuition with data and use our findings to create a more effective and scalable itinerary experience.
Identifying Pain Points
We started by asking our colleagues to plan a trip to a specific address using the app and then describe their route to us. This approach allowed us to observe what details they missed, where they hesitated, and when they became confused.
Building on these insights, we expanded our testing to real users through guerrilla testing at bus stops near our office—an easy task since Moovit users are everywhere in Israel! 😄 This hands-on method gave us valuable, real-world feedback that traditional data analysis couldn’t capture.
This method helped us identify problematic details and areas in the UI. We then analyzed the results, compared them with previous competitor research, and defined the key issues:
- The order of the route steps didn’t fully reflect the physical experience, causing confusion about the next steps.
- Important details were frequently overlooked by testers or were difficult to find.
Redesign Process
The first step was to thoroughly analyze the existing screen structure, including its edge cases and supported components. At the same time, we outlined the new components we wanted to introduce, ensuring they were designed as generically as possible to create a scalable foundation for future needs.
From this, we developed a flexible structure where legs were built from repeatable elements representing different aspects of the route—locations, actions, and information.
Next, we carefully arranged these elements to better align with both the physical journey users take and their mental expectations, making navigation more intuitive and reducing confusion.
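To make the idea of "legs built from repeatable elements" concrete, here is a minimal sketch of such a structure. All names (`Location`, `Action`, `Info`, `Leg`, `Itinerary`) are hypothetical illustrations, not Moovit’s actual data model:

```python
from dataclasses import dataclass
from typing import List, Union

# Hypothetical element types -- illustrative only, not Moovit's actual model.

@dataclass
class Location:
    name: str          # e.g. a stop or station name

@dataclass
class Action:
    description: str   # e.g. "Board bus 480" or "Walk 200 m"

@dataclass
class Info:
    text: str          # e.g. "Runs every 10 min"

Element = Union[Location, Action, Info]

@dataclass
class Leg:
    # Elements are ordered to mirror the physical journey,
    # which is what makes the structure easy to follow.
    elements: List[Element]

@dataclass
class Itinerary:
    legs: List[Leg]

# A single bus leg, ordered as the rider experiences it:
leg = Leg(elements=[
    Location("Central Station"),
    Action("Board bus 480"),
    Info("Runs every 10 min"),
    Location("Airport Terminal 3"),
])
itinerary = Itinerary(legs=[leg])
```

Because each leg is just an ordered list of generic elements, new transit types or features only require new element variants, not a new screen layout.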
Validating the Concept
We remotely tested the concept with users in London, taking advantage of the city’s complex public transit network—including buses, the Tube, and rail—to evaluate small details not commonly encountered in Israel, such as Tube pathways.
Using the Maze tool, we presented mockups of the same route in both the current version (control group) and the new version. The test was distributed via email to a large group of UK users and conducted without live moderation, with each group receiving a few dozen responses.
To simulate real-world urgency, we asked users identical questions about the route, instructing them to locate key details as quickly as possible and tap on them.
We then analyzed bounce rates, reaction times, misclicks, and other usability metrics between the groups. Maze also aggregated these factors into a “usability score” for each version, providing a clear comparison of their effectiveness.
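Maze’s actual scoring formula is its own; purely as an illustration of how several raw metrics can roll up into one comparable number, here is a sketch with made-up weights and made-up sample inputs:

```python
# Illustrative only: the weights and sample numbers below are invented,
# and Maze's real usability score is computed differently.

def usability_score(success_rate, avg_misclicks, avg_time_s, bounce_rate):
    """Combine raw usability metrics into a single 0-100 score."""
    score = 100 * success_rate           # start from task success
    score -= 10 * avg_misclicks          # penalize stray taps
    score -= 2 * max(0, avg_time_s - 5)  # penalize slow reactions beyond 5 s
    score -= 30 * bounce_rate            # penalize users who gave up
    return max(0.0, min(100.0, score))

# Hypothetical numbers for a control group and a redesigned version:
control = usability_score(success_rate=0.72, avg_misclicks=1.8,
                          avg_time_s=9.0, bounce_rate=0.15)
variant = usability_score(success_rate=0.90, avg_misclicks=0.6,
                          avg_time_s=6.5, bounce_rate=0.05)

print(f"control: {control:.1f}, new design: {variant:.1f}")
```

The point of a single aggregate like this is that it turns four noisy per-group metrics into one number that stakeholders can compare at a glance.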
To ensure that the improved route clarity was not only reflected in data but also felt by users, we asked them to rate each test based on how easy it was to follow.
The new version not only received a higher usability score but was also perceived as “easier to read” by users.
Conclusion
This process addressed usability issues that couldn’t be measured directly through standard analytics or interviews. It taught us how to make informed assumptions, validate them with large-scale testing, and gather solid evidence to support our case.
When we presented the results to the development team, the data-driven insights made it easy to secure their buy-in. This vision ultimately laid the foundation for reworking the itinerary screen. The new design concept was later finalized and successfully rolled out into production, improving the user experience for millions of riders.