Coin Toss
Reducing Travel Decision Fatigue with Generative AI

Overview
Travel planning often turns stressful, not from a lack of options, but from having to make too many decisions at once.
Between researching destinations, comparing accommodations, coordinating group preferences, and navigating safety concerns, travelers face 20+ micro-decisions before a trip even begins.
What should be exciting becomes overwhelming.
Goals
What if AI didn't just generate options, but reduced the decisions you have to make?
By distilling 20+ complex considerations into 3-4 strategic choices, we aim to create itineraries that feel personal, safe, and decisively yours, without the overwhelm.
By turning hours of travel research into minutes of clear decisions, Coin Toss captures user intent up front to generate reliable, personalized outputs while preserving user agency and guiding travelers toward stress-free plans.
Role
Product Designer
Timeline
April 2024
Team
2x Designers,
2x Developers,
1x Researcher
Tools
Figma, Miro, Notion, Zoom, UserBerry, Twilio
Impact
55%
Reduction in planning time
60%
Fewer upfront decisions
2x
Faster planning time
PROBLEM
Planning Trips Is Difficult, But Not for the Reason You Think
Planning a trip requires 20+ decisions, including destination, bookings, daily activities, restaurants, and safety considerations.
Travelers bounce between different apps trying to piece together information, while AI tools generate overwhelming lists with no structure. The problem isn't a lack of options; it's too many decisions with no framework for making them.
CURRENT USER JOURNEY:

PROPOSED USER JOURNEY:

USER RESEARCH
80% of users felt overwhelmed planning trips
Before diving into design, we wanted to ensure this was a real, shared pain point, so we conducted in-depth interviews with 9 individuals. From solo travelers to group trip planners, we explored their travel habits, frustrations, and aspirations. We also wanted to understand why existing tools were failing and what would make AI actually useful for travel planning.

“ChatGPT gave me a list of things to do in Barcelona. Cool, but now what? I still didn’t know how it fit my budget, travel style or if these places are even close to each other.”
— Interviewee 4
KEY INSIGHTS and PAIN POINTS
First-Timer vs Returner Mental Model
First-timers seek landmarks and safety guidance, while returners want local, off-the-beaten-path experiences, but existing tools treat them the same.
Generic AI Outputs
Six participants had tried ChatGPT for planning. All reported feeling overwhelmed by unstructured, generic outputs that required extensive validation.
Budget Anxiety Is Real But Unspoken
Although hesitant to discuss budget, participants consistently cited cost as a key factor and wanted guidance without feeling constrained or judged.
Safety and Cultural Awareness Concerns
Six interviewees raised a major concern around safety and lack of knowledge about social and cultural norms in unfamiliar destinations.
How might we help travelers make fewer, better decisions while generating personalized itineraries that reduce the overwhelm of planning?
DESIGN PROCESS
Three Steps to a Personalized Itinerary
This flow highlights how key preferences and inputs shape personalized results while keeping final decision autonomy in the user's hands. Optional loops, such as regenerating itineraries and customizing results, were kept visible to support exploration without overwhelming the user.
USER FLOW

IDEATING SOLUTIONS 1.1
Less confusion, faster decisions, and seamless interaction
Rapid sketching allowed us to explore different layouts, interactions, and visual patterns, helping us identify the most intuitive designs that minimized friction and empowered user choice. We then refined and combined the strongest elements into a cohesive concept for wireframing.

The early sketches of Coin Toss
We translated our sketches into mid-fidelity wireframes, exploring different parts of the flow. Despite time constraints, we created multiple versions, quickly tested them, and collaboratively refined the strongest ideas into a cohesive solution.

The early iterations of Coin Toss
USER JOURNEY STEP 1
HIGH FIDELITY PROTOTYPE
Turning Preferences Into System Logic
A lightweight onboarding that captures only the inputs that influence the itinerary, minimizing effort while maximizing signal.
DESIGN FOR NON-DETERMINISTIC SPACES
User Input as Model Constraints
This flow shows how an open-ended travel planning task was translated into a structured AI workflow.
Instead of chatting with a generic bot, users provide a small set of focused inputs such as destination, dates, budget, and travel style. These inputs form a prompt schema that clearly defines user context, constraints, and the AI’s role. The output is rendered as a clean, day-by-day itinerary with activities, timing, travel distance, and estimated costs.
By tightly linking inputs, prompt structure, and visual output, the system keeps AI behavior transparent, editable, and user-controlled, delivering personalized plans without overwhelming choice.

User input and AI output
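As a rough illustration of the input-to-prompt translation described above, here is a minimal sketch in Python. The field names, prompt wording, and `build_prompt` helper are hypothetical, not the shipped implementation; the point is that a fixed schema replaces free-form chat with a constrained, repeatable task.

```python
from dataclasses import dataclass


@dataclass
class TripInputs:
    """The small set of focused inputs captured during onboarding (names are illustrative)."""
    destination: str
    dates: str
    budget_per_day: int   # rough daily budget
    travel_style: str     # e.g. "first-timer" or "returner"


def build_prompt(inputs: TripInputs) -> str:
    """Translate user inputs into a constrained prompt schema.

    The schema fixes the AI's role, the user context, and the required
    output format, so the model's behavior stays predictable and the
    rendered itinerary maps directly back to what the user provided.
    """
    return (
        "Role: travel planner generating a day-by-day itinerary.\n"
        f"Destination: {inputs.destination}\n"
        f"Dates: {inputs.dates}\n"
        f"Budget: about {inputs.budget_per_day} per day\n"
        f"Travel style: {inputs.travel_style}\n"
        "Output: for each day, list activities with time, category, "
        "travel distance, and estimated cost."
    )
```

Because every prompt is assembled from the same fields, a change to any onboarding input produces a visible, traceable change in the generated plan, which is what keeps AI behavior transparent and editable.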
ITERATION AND TESTING
Exploring Ways to Tell the Trip Story
We explored multiple UI patterns for presenting itineraries. Early concepts were highly visual and card-heavy, offering many options per time slot, but they introduced information overload and excessive micro-decisions. A second iteration reduced visual noise but buried key details, making choices feel unclear and lowering user confidence.
The final pattern tested best: two clear itinerary options with a lightweight customize flow. Vertically structured cards surface time, category, and cost at a glance, while simple regenerate and proceed actions keep the experience scannable, confident, and user-controlled.

USER JOURNEY STEP 2
HIGH FIDELITY PROTOTYPE
Comparing Itineraries
The AI explores multiple valid solutions in parallel, surfacing two different itineraries while keeping the final decision firmly in the user’s control.
ITERATION AND TESTING
Why Two Itineraries Worked Best
I tested three levels of choice: a single itinerary, two itineraries, and three‑plus itineraries. The heat maps show where people’s attention went, and the notes capture behavior and sentiment.
With a single itinerary, people scanned quickly but often hit ‘regenerate’ and questioned the validity of the AI. With three to five itineraries, people struggled to remember differences, felt the options blend together, and decision fatigue returned.
Two itineraries turned out to be the sweet spot: enough contrast for users to compare and say ‘this feels more like me’ without feeling overwhelmed. That insight drove the final pattern of offering two AI‑generated plans.

ITERATION AND TESTING
Confidence Levels and Human Check
On top of choosing between itineraries, we also experimented with how the system builds further trust with users.
We added a ‘Recommended’ chip on activities where the model was most confident and a small verification tooltip for items that might change on the ground, like hours or pricing. That created a human‑check layer and kept the human firmly in control of the final plan.
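The trust signals above can be sketched as simple annotation logic. This is a hypothetical sketch, not the production code: the threshold value, field names, and `annotate_activity` helper are assumptions made for illustration.

```python
# Hypothetical confidence threshold; the real cutoff would be tuned through testing.
RECOMMENDED_THRESHOLD = 0.85

# Details that may change on the ground and deserve a human check.
VOLATILE_FIELDS = {"opening_hours", "price"}


def annotate_activity(activity: dict) -> dict:
    """Attach UI trust signals to a model-generated activity.

    - 'Recommended' chip only where the model is most confident.
    - Verification tooltip on fields that should be double-checked
      (hours, pricing), keeping the human in control of the final plan.
    """
    annotated = dict(activity)
    annotated["recommended"] = (
        activity.get("confidence", 0.0) >= RECOMMENDED_THRESHOLD
    )
    annotated["verify"] = sorted(
        VOLATILE_FIELDS & set(activity.get("fields", []))
    )
    return annotated
```

The design intent is that confidence is surfaced selectively: a chip on every card would dilute the signal, while flagging only high-confidence items and volatile details gives users a clear place to apply judgment.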

USER JOURNEY STEP 2
HIGH FIDELITY PROTOTYPE
But what if I like parts of both itineraries?
Users could combine elements from two AI-generated itineraries, preserving flexibility and control without reintroducing planning fatigue.
EDGE CASE SCENARIOS
What happens when the AI gets it wrong?
Our early designs focused on the happy path, but as we iterated, a key question emerged: what happens when the AI gets it wrong? Given the non-deterministic nature of the model, we identified common failure scenarios and translated them into explicit design considerations. These informed the final solution, ensuring users could notice errors, adjust the output, and stay in control even when the AI missed the mark.
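To make the failure-scenario thinking concrete, here is a minimal sketch of the kind of checks that could flag a bad generation before the user has to catch it. The `find_issues` helper, the field names, and the specific checks (budget overrun, implausible travel distance) are illustrative assumptions, not the actual implementation.

```python
def find_issues(itinerary: list, budget_per_day: int, max_km: float) -> list:
    """Flag common failure modes in a generated itinerary.

    Returns human-readable issue strings the UI could surface next to
    a regenerate action, so the user sees *why* a plan missed the mark
    instead of just getting a wall of activities.
    """
    issues = []
    for day in itinerary:
        # Failure mode 1: the plan ignores the stated budget.
        cost = sum(a["cost"] for a in day["activities"])
        if cost > budget_per_day:
            issues.append(
                f"Day {day['day']}: over budget ({cost} > {budget_per_day})"
            )
        # Failure mode 2: activities too far apart to be practical.
        for a in day["activities"]:
            if a.get("distance_km", 0) > max_km:
                issues.append(
                    f"Day {day['day']}: '{a['name']}' is far "
                    f"({a['distance_km']} km)"
                )
    return issues
```

Surfacing issues like these alongside the itinerary turns a silent model failure into an explicit, recoverable moment, which is what keeps the user aware and in control.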


Key Learnings
AI UX as an end-to-end collaborative system
Conversations with engineers clarified that personalization requires constraints, not creativity alone. My role became translating user intent into structured inputs through design, effectively shaping the constraints the model operates within. I stopped treating AI as a feature and started treating it as a system shaped by design decisions.
Prompts are product decisions
Prompt design emerged as a core product responsibility, not a technical afterthought. Every instruction, constraint, and output format directly shaped what users experienced. Treating prompts as product decisions ensured consistency, predictability, and alignment with user expectations, reinforcing that AI behavior is part of the interface.
Trust is built through transparency
User trust increased when the system clearly showed how inputs influenced outputs. By making AI behavior visible and editable, users felt more confident engaging with the results, even when they wanted changes.
Next Steps
Future iterations would explore more agentic AI behaviors, such as assisting with bookings and real-time adjustments, while maintaining user control. I’d also conduct longitudinal testing to understand how preferences evolve over time and use those insights to refine onboarding questions and continuously improve recommendation quality.
