by Alexander Fandén
YCS is an enterprise tool where hotels can manage rates and allotment, bookings, promotions and much more. The YCS app was launched in early 2018 with a limited set of features. During 2019, we set out to expand the app and build out some of the most-requested features.
Our goal with this project was to provide an overview of hotel performance and operations, thus enabling hotels to make better decisions and accelerate their business with Agoda.
To better understand where to focus our efforts, we conducted both qualitative and quantitative research. We sent out surveys asking hotels to rank their most important features, and travelled to some of our key markets to interview hoteliers about their wants and needs.
When we talked to our users, it became clear that we weren’t providing sufficient insights into how they perform. The most common way for hotels to get insights was in monthly meetings with a Market Manager from Agoda, who would bring a printed monthly report and discuss areas of improvement face-to-face.
While this report was appreciated, it didn’t provide all the data our hotels wanted, nor did it allow them to customize it the way they needed. By providing a more flexible way to generate these insights, we believed hotels would be able to make better decisions. It would also reduce the labour-intensive process of printing reports and visiting hotels.
I led the design of the overall app experience and worked as the UX/UI Designer throughout this project. I worked together with a UX Researcher, Copywriter, Prototyper, Product Owner and a team of Developers and QA Engineers.
In order to understand what data our hotels needed and which tasks they normally performed, we had to immerse ourselves in the daily work of the hotels and their staff, as well as that of our Market Managers, who provide data to the hotels. We shadowed our staff, took note of the most common tasks they performed during a day, and scheduled hotel visits to understand how hoteliers interacted with our systems and what data was most valuable to them.
Not all hotels operate in the same way, and it was important for us to cover as many types of markets, hotels and users as possible on our visits. We targeted specific hotels in our biggest markets from the data of active users on both app and desktop. With the help of local Partner Services teams, we could schedule visits with a diverse set of hotels, from small family-run boutique hotels in Taichung, Taiwan to larger chain hotels in Cebu, Philippines.
During our hotel visits, we interviewed the staff to understand how and when they used our system. We asked them to rank different features and data by importance, using card sorting.
We consolidated the learnings and mapped out the most wanted features across the different types of hotels, markets and roles within hotels. By mapping out feature needs by role, it became clear that different roles have different needs:
We decided to focus on the decision makers because they have a bigger say at the hotel. We weren’t catering to their needs at all at the time, whilst operational staff already had the tools they needed on our website. Providing performance data was going to be a bigger challenge for us, and therefore something we wanted to tackle right away.
I started off by creating a mind map of all scores, metrics, events and other interesting pieces of information that might help hotels make better decisions. It included data that we already had, data that was requested by our hotels, data that our competitors provided and some new ideas.
While the different types of data might provide some useful information on their own, we believed that we could generate much more meaningful insights by combining them. So, I had to come up with some formula to generate these insights. Inspired by how Google Analytics lets you generate reports, I grouped the data into different categories:
By comparing the data using other dimensions or time frames, we could generate new insights.
For example, if a hotelier analyses her bookings for the last 30 days for each nationality, compared to her competitors, she might find that her property attracts a significantly lower number of guests from the Japanese market.
At this stage, our new tool can provide suggestions by looking at her hotel, their competitors and trends for Japanese travellers. For example, there might be certain amenities that are important for Japanese travellers, which her hotel doesn’t provide.
With the insight, she can take action immediately.
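As a rough sketch of the kind of comparison described above, the logic could look something like the following. All data, names and thresholds here are invented for illustration; the actual scoring and data pipeline was built by our data analysts and back-end team.

```python
# Hypothetical sketch: flag guest nationalities where a hotel's booking
# volume over the last 30 days falls well below the average of its
# competitor set. All numbers below are invented sample data.

hotel_bookings = {"TH": 120, "CN": 90, "JP": 8, "KR": 40}
competitor_avg = {"TH": 110, "CN": 85, "JP": 45, "KR": 38}

def underperforming_markets(hotel, competitors, threshold=0.5):
    """Return nationalities where the hotel receives fewer than
    `threshold` times the competitor-average bookings."""
    flagged = []
    for market, comp_count in competitors.items():
        own = hotel.get(market, 0)
        if own < comp_count * threshold:
            flagged.append(market)
    return flagged

print(underperforming_markets(hotel_bookings, competitor_avg))  # → ['JP']
```

In this toy example, the Japanese market is flagged because the hotel attracts far fewer Japanese guests than its competitors, which is exactly the kind of insight the tool would then pair with a suggestion, such as adding amenities popular with Japanese travellers.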
With all the potential data points defined and a formula to generate insights, I started to work on some initial concepts. At the time, the landing page of our app was a very basic feed of generic suggestions. The engagement was low and our users were reluctant to take the suggestions, so we decided to replace this feed altogether with our new solution.
We wanted to cater to both our key user groups from the get-go—the decision makers and the operational staff. Therefore, I had to come up with a homepage that displayed both performance related data and activities at the hotel.
The first concept we wanted to try out was a dashboard with two tabs:
Our assumption was that this would be the clearest way to group the content since there was such a clear distinction between our users. Based on the wireframes, I mocked up high fidelity designs and created a simple prototype with InVision.
We invited Bangkok-based hotels to our office to test the prototype. It quickly turned out that the tabbed approach didn’t work. Almost none of the test participants noticed the tabs, and the labels of the tabs (especially “Operation”) were not clear enough.
We renamed the tab “My feed” and designed an alternate layout that combined the performance and operations into one page. We invited more hotels and let them play around with both solutions.
The renamed tab didn’t help the users discover the tabbed navigation. However, when we provided all the content within the same page, we noticed that hoteliers instantly started to scroll the feed to find out more. In this design, we provided filters based on the type of content: Performance, Bookings, Messages etc. Almost all participants tried to interact with these filters.
With these insights, we decided to move forward with the one-page solution.
One of the main features on the dashboard was a set of scores that would help hotels understand how well they are performing:
We set out to see if the hotels would understand these scores, and if they could find ways to improve the scores by using the new dashboard.
During the first round of usability tests, we were using the InVision app on one of our test devices. The app offers a specific mode for usability testing; however, I forgot to instruct my researcher on how to turn it on prior to the labs - my bad!
The usability testing mode allows users to navigate only via the hotspots I had added to the prototype - in contrast to the default mode, where users can swipe between screens and therefore abandon the intended navigation. Both test subjects in our morning session discovered this trick as they were trying to swipe on the line chart in the left screen below:
They expected to be able to move back and forth through the time frame in the line chart, but instead they ended up on a different screen, the Content score (middle). Both test subjects had a similar reaction: “Oh! I don’t have to click back and then click again to see the next score - nice!”.
This accident provided two valuable insights to our team:
With these exciting insights I set out to adjust the navigation of the scores:
While the users found the swipe navigation easy to use, they had discovered it by accident. So I added dots underneath the scores to increase the affordance of swiping and to indicate how many other scores we provide. I also added a hint of the next score in the corners of the screen to make it more obvious.
This type of navigation is not really possible to prototype in InVision, so I brought one of our Prototypers into the project. With Framer, we were able to build more advanced and interactive prototypes, supporting all the new interactions we wanted to test:
Building a prototype in Framer, or any tool that requires coding, is fairly time consuming. What would take me a couple of hours with InVision took us a week or more to develop with Framer.
With this in mind, we decided to limit the prototype to some key features:
As the development of the prototype was quite time consuming, I had time to build out the remaining designs and document all the new components.
In my initial designs I had set up guidelines on how to use colors in our data visualization. I wanted to distinguish many different things with color:
As I added more data to the designs, I started to run out of colors to use from our palette. On top of this, we noticed that a couple of test participants confused our pink and turquoise with red and green. As a result, they thought they were performing badly on metrics where we used pink, and good where we used turquoise, even if that wasn’t the case.
Our team is usually very hesitant to introduce new colors or UI elements in our designs, but we decided that this was a good enough reason to introduce more colors to the palette.
After visiting four markets to do our initial research (Thailand, Taiwan, Japan and Philippines) and three markets to validate our designs (Thailand, Malaysia and Singapore), we felt confident enough to move onto the next stage of the project. Now we had to decide on a plan to build and roll out the new features!
The performance metrics and scores were for the most part new. We had to pull in data analysts and our back-end team to figure out how to calculate scores and generate the data for the hotels.
The operations-focused activity feed contained a lot of information that we already displayed on our website, and would therefore be a lot faster to ship. So we decided to roll these out first, whilst building the back-end logic for the performance metrics and scores.
At the time of writing this, we’re just about to start development of the MVP version.
Moving between different levels of fidelity when designing is key. Low fidelity wireframes work great for quick iterations of different concepts and idea generation. However, when performing usability labs, it proves useful to use a prototype that is as realistic as possible.
After talking to 15-20 hotels, we noticed that some hoteliers are used to crunching numbers, analyzing their performance regularly and using it as a basis for their business decisions. Others tend to play it by ear. In general, the first group would understand how to read the graphs, analyze the numbers and make sense of the prototype. The second group sometimes struggled with abbreviations, legends for graphs and more. We knew we wouldn’t be able to cater to both groups, at least not in the near future, and therefore we had to make a decision - do we cater to the savvier users, or do we simplify it so that everybody can understand it?
When we interviewed the users, we asked them about our competitors and how they used their tools. The less savvy users did not use any of our competitors’ performance tools today, so it’s unlikely that they would use ours in the future. Based on that, we decided to cater to the savvier users.
Alexander Fandén (UX Design)
Trey Hurst (UX Research)
Muhammad Athar (Prototyper)
Wesley Hsu (Copywriter)
Ido Hertz (Product Owner)
Tools I used