80% of users booked the cheapest room — not by preference, but because the page gave them no basis to compare. Surfacing each room's value lifted non-cheapest bookings by 32%.
Most bookings come from the cheapest room, which is also the lowest-margin option, creating a significant business gap at scale.
To understand the intent behind the data, we surveyed 5,200 users. Most said they'd happily pay more for a room that better fit their needs. But in actual booking data, only 20% of bookings came from higher-priced rooms.
From the survey responses and behavioral patterns, two distinct groups emerged within cheapest-room bookers: users who strictly stick to the lowest price, and users open to upgrading when value is clear. The split helped define who we were actually designing for.
We decided to focus on value-open users — travelers who actually browse rooms and want to understand the differences, but end up defaulting to the cheapest option because the page never gives them an actionable difference to weigh. They're the segment most likely to upgrade if the value is made visible.
Two main reasons surfaced from the survey, accounting for the majority of the gap.
To find out why, I audited the current room selection page and found three structural problems that left users with no way to compare.
32% of respondents saw the difference between rooms but felt the benefit didn't justify paying more. To understand why, I split the question into two testable hypotheses.
Users respond most to direct experience benefits (a bigger room and a better view), far above indirect levers like discounts or floor level. These two became the upsell's lead value propositions.
In general, smaller premiums should feel more reasonable and lead more users to consider upgrading. But tolerance likely varies by benefit: a 10% premium for an ocean view feels different from a 10% premium for a higher floor. So we chose to validate this through experimentation, testing different ranges of price gaps.
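A minimal sketch of how such an experiment could be parameterized, assuming hypothetical arm names and premium bands; none of these values or identifiers come from the actual test setup.

```python
import hashlib

# Hypothetical experiment arms: each arm caps the upgrade premium we are
# willing to recommend, as a fraction of the cheapest room's price.
PREMIUM_BAND_ARMS = {
    "control": None,   # no recommendation shown
    "band_05": 0.05,   # recommend upgrades priced up to +5%
    "band_10": 0.10,
    "band_20": 0.20,
}

def assign_arm(user_id: str) -> str:
    """Deterministically bucket a user into one arm by hashing their id."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    arms = sorted(PREMIUM_BAND_ARMS)
    return arms[digest % len(arms)]

def within_band(cheapest_price: float, candidate_price: float, arm: str) -> bool:
    """True if the candidate room's premium falls inside the arm's band."""
    cap = PREMIUM_BAND_ARMS[arm]
    if cap is None:
        return False
    return (candidate_price - cheapest_price) / cheapest_price <= cap
```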
I considered three concept-level ways to close the information gap, and chose the one that could validate the hypothesis within the existing constraints.
Page-level redesigns were out of scope, and since users browse only ~1.5 rooms on average, an intervention that sits far down the fold would rarely be reached.
Annotating every room spreads attention thin and dilutes the signal. The hypothesis needed one clear contrast point, not many.
The chosen approach touches only the recommended room, creates a direct comparison high in the scroll, and informs users without pressuring price-sensitive ones.
Define what good looks like, weigh the trade-offs across five options, then commit.
Users mentioned that even when they understood the benefit, the price gap still felt too high. To address this, I tied the maximum allowed price premium to the actual benefit being offered, so the price feels proportionate to what the user gets.
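A minimal sketch of this rule, assuming hypothetical benefit categories and cap values; the real thresholds came out of the experiments and aren't listed here.

```python
# Hypothetical caps: the largest premium we will recommend, as a fraction of
# the cheapest room's price, keyed by the benefit the upgrade actually delivers.
PREMIUM_CAP_BY_BENEFIT = {
    "bigger_room": 0.20,   # direct experience benefits tolerate larger gaps
    "better_view": 0.15,
    "higher_floor": 0.05,  # indirect benefits tolerate only small gaps
}

def should_recommend(cheapest_price: float, candidate_price: float, benefit: str) -> bool:
    """Recommend an upgrade only if its premium stays within the benefit's cap."""
    cap = PREMIUM_CAP_BY_BENEFIT.get(benefit)
    if cap is None:
        return False
    premium = (candidate_price - cheapest_price) / cheapest_price
    return 0 < premium <= cap
```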
Three metrics were defined upfront: primary, guardrail, and business. Each card shows the gate I set, the data, and what it means.
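A minimal sketch of how such gates could be checked, with hypothetical metric names, values, and thresholds; the actual gates and readouts are the ones shown on the cards, not these.

```python
from dataclasses import dataclass

@dataclass
class MetricGate:
    name: str                       # what the metric measures
    observed: float                 # value measured in the experiment
    gate: float                     # threshold the metric must clear
    higher_is_better: bool = True   # guardrails may only need to hold steady

    def passes(self) -> bool:
        return self.observed >= self.gate if self.higher_is_better else self.observed <= self.gate

# Hypothetical readout: one primary, one guardrail, one business metric.
gates = [
    MetricGate("non_cheapest_booking_rate", observed=0.26, gate=0.24),      # primary
    MetricGate("overall_booking_conversion", observed=0.081, gate=0.080),   # guardrail: must not drop
    MetricGate("revenue_per_visitor", observed=1.07, gate=1.00),            # business
]
print(all(g.passes() for g in gates))
```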
Each phase shipped a small, focused change. No single design moved the metric on its own — the compounding effect drove the final outcome.
We shipped a recommendation banner inside the room card showing the benefit and price delta.
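A minimal sketch of how the banner content could be assembled from the benefit and the price delta; the function name, fields, and copy are illustrative, not the production implementation.

```python
def banner_copy(benefit_label: str, cheapest_price: float,
                recommended_price: float, currency: str = "$") -> str:
    """Compose the one-line recommendation shown inside the room card."""
    delta = recommended_price - cheapest_price
    return f"{benefit_label} for {currency}{delta:.0f} more per night"

# Example output: "Ocean view for $18 more per night"
print(banner_copy("Ocean view", 112.0, 130.0))
```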
The click signal confirmed that users noticed the information gap, but awareness alone wasn't enough to convert.
I added a bottom sheet showing both rooms side by side so users could clearly compare the room types.
Conversion improved meaningfully for users who saw it, but the coverage was limited.
We expanded coverage, including web users in the impact segment and increasing the number of recommended rooms shown per user. With the broader reach, we finally validated the hypothesis.
The same recognition gap exists one level down: 80% of users still pick the cheapest offer within a room. The next experiment applies the same pattern at the offer level, starting with breakfast, the clearest benefit with the most available supply.
The design works. What's left is scale. 60% of users never saw the banner, and the same information gap exists one level deeper, at the offer level.