"When people tell you something is wrong, they're usually right. When they tell you how to fix it, they're usually wrong."

When renowned actor and comedian Bill Hader made this comment, he wasn't necessarily thinking about product development or engineering. Yet the concept maps well onto those domains, serving as a valuable lesson for everyone from young product developers to seasoned engineers.

At the heart of this idea is the recognition that feedback, particularly from users or customers, is an invaluable source of insight into problems. Users are highly adept at pointing out what's wrong or where pain exists. Their lived experience with a product or service often lends them a unique perspective, allowing them to identify issues that may not be immediately apparent to those who designed or built it.

However, translating these problem areas into workable solutions is a skill set that resides more comfortably with the creators: the engineers and product developers. This is where the second part of Hader's observation rings true. When users propose solutions, they often reflect a personal perspective or a narrow view of the problem, unaware of technical complexities, overarching product strategy, or design constraints.

We might cringe when we hear, "we just went to users and asked them what they wanted." This approach, although seemingly customer-centric, can lead to misguided efforts and misplaced resources. It risks being swayed by articulate or loud voices rather than by genuine, widespread needs.

It's crucial to step back and reconsider how we approach and use feedback. Product teams and engineers should listen attentively to the problems users describe, then apply their professional knowledge and expertise to devise appropriate solutions. This ensures we address real issues in the most efficient and effective way, driving innovation rooted in user needs while retaining a firm grasp on feasibility and strategic alignment.
This principle is perhaps more nuanced in the field of engineering. Unlike the arts, engineering leans towards empirical, often quantifiable solutions. There are standards, best practices, and established methodologies that provide guidelines. Still, the core concept remains—listen for the problem, and then employ your expertise to devise the solution. So, the next time you receive feedback, remember: focus on the issue at hand and leverage your own skills, knowledge, and creativity to find a solution. Doing so will allow you to turn insights into innovation, driving your product or project towards success. Feedback, when decoded correctly, can be one of the most powerful tools in your arsenal. #learning #productivity #product #engineering
Designing with User Feedback
-
Getting the right feedback will transform your job as a PM. More scalability, better user engagement, and growth. But most PMs don't know how to do it right. Here's the Feedback Engine I've used to ship highly engaging products at unicorns & large organizations:

The right feedback can literally transform your product and company. At Apollo, we launched a contact enrichment feature. Feedback showed users loved its accuracy, but... they needed bulk processing. We shipped it and saw a 40% increase in user engagement. Here's how to get it right:

𝗦𝘁𝗮𝗴𝗲 𝟭: 𝗖𝗼𝗹𝗹𝗲𝗰𝘁 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸
Most PMs get this wrong. They collect feedback randomly, with no system or strategy. But remember: your output is only as good as your input, and if your input is messy, it will only lead you astray. Here's how to collect feedback strategically:
→ Diversify your sources: customer interviews, support tickets, sales calls, social media & community forums, etc.
→ Be systematic: track feedback across channels consistently.
→ Close the loop: confirm your understanding with users to avoid misinterpretation.

𝗦𝘁𝗮𝗴𝗲 𝟮: 𝗔𝗻𝗮𝗹𝘆𝘇𝗲 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀
Analyzing feedback is like building the foundation of a skyscraper: if it's shaky, your decisions will crumble. So don't rush through it. Dive deep to identify patterns that will guide your actions in the right direction. Here's how:
→ Aggregate feedback: pull data from all sources into one place.
→ Spot themes: look for recurring pain points, feature requests, or frustrations.
→ Quantify impact: how often does an issue occur?
→ Map risks: classify issues by severity and potential business impact.

𝗦𝘁𝗮𝗴𝗲 𝟯: 𝗔𝗰𝘁 𝗼𝗻 𝗖𝗵𝗮𝗻𝗴𝗲𝘀
Now comes the exciting part: turning insights into action. Execution here can make or break everything. Do it right, and you'll ship features users love. Mess it up, and you'll waste time, effort, and resources. Here's how to execute effectively:
→ Prioritize ruthlessly: focus on high-impact, low-effort changes first.
→ Assign ownership: make sure every action has a responsible owner.
→ Set validation loops: build mechanisms to test and validate changes.
→ Stay agile: be ready to pivot if feedback reveals new priorities.

𝗦𝘁𝗮𝗴𝗲 𝟰: 𝗠𝗲𝗮𝘀𝘂𝗿𝗲 𝗜𝗺𝗽𝗮𝗰𝘁
What can't be measured can't be improved. If your metrics don't move, something went wrong: either the feedback was flawed, or your solution didn't land. Here's how to measure:
→ Set KPIs for success, like user engagement, adoption rates, or risk reduction.
→ Track metrics post-launch to catch issues early.
→ Iterate quickly and keep improving based on feedback.

In a nutshell, the Feedback Engine creates a cycle that drives growth and reduces risk:
→ Collect feedback strategically.
→ Analyze it deeply for actionable insights.
→ Act on it with precision.
→ Measure its impact and iterate.

P.S. How do you collect and implement feedback?
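The aggregate/quantify/map steps can be sketched in a few lines. This is a minimal illustration, not any particular team's tooling: the feedback items, theme names, and 1-3 severity scale are all hypothetical, and themes are ranked by frequency times average severity.

```python
from collections import Counter

# Hypothetical feedback items: (source, theme, severity on a 1-3 scale).
feedback = [
    ("support_ticket", "slow export", 3),
    ("interview", "slow export", 2),
    ("sales_call", "missing bulk actions", 3),
    ("community", "slow export", 2),
    ("support_ticket", "confusing onboarding", 1),
    ("interview", "missing bulk actions", 3),
]

def rank_themes(items):
    """Rank themes by frequency x average severity, highest first."""
    counts = Counter(theme for _, theme, _ in items)
    severity_sum = Counter()
    for _, theme, sev in items:
        severity_sum[theme] += sev
    scores = {t: counts[t] * (severity_sum[t] / counts[t]) for t in counts}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for theme, score in rank_themes(feedback):
    print(f"{theme}: {score:.1f}")
```

In this toy data, "slow export" outranks the higher-severity "missing bulk actions" because it recurs across more channels, which is exactly the signal aggregation is meant to surface.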
-
Let's face it: most user interviews are a waste of time and resources. Teams conduct hours of interviews yet still build features nobody uses. Stakeholders sit through research readouts but continue to make decisions based on their gut instincts. Researchers themselves often struggle to extract actionable insights from their conversation transcripts.

Here's why traditional user interviews so often fail to deliver value:

1. They're built on a faulty premise
The conventional interview assumes users can accurately report their own behaviors, preferences, and needs. People are notoriously bad at understanding their own decision-making processes and predicting their future actions.

2. They collect opinions, not evidence
"What do you think about this feature?" "Would you use this?" "How important is this to you?" These standard interview questions generate opinions, not evidence. Opinions (even from your target users) are not reliable predictors of actual behavior.

3. They're plagued by cognitive biases
From social desirability bias to overweighting recent experiences to confirmation bias, interviews are a minefield of cognitive distortions.

4. They're often conducted too late
Many teams turn to user interviews after the core product decisions have already been made. They become performative exercises to validate existing plans rather than tools for genuine discovery.

5. They're frequently disconnected from business metrics
Even when interviews yield interesting insights, they often fail to connect directly to the metrics that drive business decisions, making it easy for stakeholders to dismiss the findings.

👉 Here's how to transform them from opinion-collection exercises into powerful insight generators:

1. Focus on behaviors, not preferences
Instead of asking what users want, focus on what they actually do. Have users demonstrate their current workflows, complete tasks while thinking aloud, and walk through their existing solutions.

2. Use concrete artifacts and scenarios
Abstract questions yield abstract answers. Ground your interviews in specific artifacts. Have users react to tangible options rather than imagining hypothetical features.

3. Triangulate across methods
Pair qualitative insights with behavioral data and other sources of evidence. When you find contradictions, dig deeper to understand why users' stated preferences don't match their actual behaviors.

4. Apply framework-based synthesis
Move beyond simply highlighting interesting quotes. Apply structured frameworks to your analysis.

5. Directly connect findings to decisions
For each research insight, explicitly identify what product decisions it should influence and how success will be measured. This makes it much harder for stakeholders to ignore your recommendations.

What's your experience with user interviews? Have you found ways to make them more effective? Or have you discovered other methods that deliver deeper user insights?
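The triangulation point can be made concrete with a small sketch. Everything here is invented (feature names, a 1-5 survey importance scale, weekly usage counts); the idea is simply to rank features by stated importance and by observed use, then flag the ones whose ranks disagree as the contradictions worth digging into.

```python
# Hypothetical data: stated importance (1-5 survey) vs. observed weekly uses.
stated = {"dark_mode": 5, "bulk_edit": 2, "export_pdf": 4}
observed_uses = {"dark_mode": 1, "bulk_edit": 30, "export_pdf": 2}

def contradiction_candidates(stated, observed, min_gap=2):
    """Flag features where stated importance and observed use disagree.

    Ranks features by each signal and returns those whose rank positions
    differ by at least min_gap -- the stated/actual mismatches.
    """
    stated_rank = {f: r for r, (f, _) in enumerate(
        sorted(stated.items(), key=lambda kv: kv[1], reverse=True))}
    observed_rank = {f: r for r, (f, _) in enumerate(
        sorted(observed.items(), key=lambda kv: kv[1], reverse=True))}
    return [f for f in stated
            if abs(stated_rank[f] - observed_rank[f]) >= min_gap]
```

Here users *say* dark mode matters most, but bulk edit dominates actual usage; both get flagged, and the follow-up interviews can probe why.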
-
If you're a UX researcher working with open-ended surveys, interviews, or usability session notes, you probably know the challenge: qualitative data is rich - but messy. Traditional coding is time-consuming, sentiment tools feel shallow, and it's easy to miss the deeper patterns hiding in user feedback. These days, we're seeing new ways to scale thematic analysis without losing nuance. These aren’t just tweaks to old methods - they offer genuinely better ways to understand what users are saying and feeling. Emotion-based sentiment analysis moves past generic “positive” or “negative” tags. It surfaces real emotional signals (like frustration, confusion, delight, or relief) that help explain user behaviors such as feature abandonment or repeated errors. Theme co-occurrence heatmaps go beyond listing top issues and show how problems cluster together, helping you trace root causes and map out entire UX pain chains. Topic modeling, especially using LDA, automatically identifies recurring themes without needing predefined categories - perfect for processing hundreds of open-ended survey responses fast. And MDS (multidimensional scaling) lets you visualize how similar or different users are in how they think or speak, making it easy to spot shared mindsets, outliers, or cohort patterns. These methods are a game-changer. They don’t replace deep research, they make it faster, clearer, and more actionable. I’ve been building these into my own workflow using R, and they’ve made a big difference in how I approach qualitative data. If you're working in UX research or service design and want to level up your analysis, these are worth trying.
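The co-occurrence idea above is simple enough to sketch. The post's own workflow is in R, but the same logic is shown here in Python with invented data: each coded response is tagged with the themes it touches, and pair counts become the cells of a heatmap.

```python
from collections import Counter
from itertools import combinations

# Invented example: each coded response tagged with the themes it mentions.
responses = [
    {"navigation", "search"},
    {"search", "performance"},
    {"navigation", "search", "performance"},
    {"onboarding"},
]

def cooccurrence(tagged_responses):
    """Count how often each pair of themes appears in the same response."""
    pairs = Counter()
    for themes in tagged_responses:
        # Sort so each unordered pair gets one canonical key.
        for a, b in combinations(sorted(themes), 2):
            pairs[(a, b)] += 1
    return pairs

co = cooccurrence(responses)
```

Feeding these pair counts into any heatmap library then shows which problems cluster: here, search issues co-occur with both navigation and performance complaints, hinting at a shared root cause.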
-
Navigating the product development process is a bit like guiding a frog through its habitat: close observation and adaptation to feedback are essential. This approach not only aligns products with user needs but also significantly improves resource efficiency.

𝐓𝐡𝐞 𝐈𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐜𝐞 𝐨𝐟 𝐅𝐞𝐞𝐝𝐛𝐚𝐜𝐤 𝐢𝐧 𝐏𝐫𝐨𝐝𝐮𝐜𝐭 𝐃𝐞𝐯𝐞𝐥𝐨𝐩𝐦𝐞𝐧𝐭
- MDP Creation: Start by launching a Minimal Desirable Product (MDP); this serves as your basic model to initiate user interaction.
- User Observation: Monitor how users interact with the MDP. Do they find it intuitive? Are there unforeseen issues?
- Feedback Collection: Actively seek user feedback through surveys, direct observations, and interviews to gather valuable insights for improvement.
- Iterative Design: Refine and enhance the product based on this feedback, focusing on features that genuinely add value.
- Continuous Improvement: Maintain a cycle of feedback and improvement, ensuring the product remains relevant and effective over time.

𝐀𝐛𝐨𝐮𝐭 42% 𝐨𝐟 𝐬𝐭𝐚𝐫𝐭𝐮𝐩𝐬 𝐅𝐀𝐈𝐋 𝐛𝐞𝐜𝐚𝐮𝐬𝐞 𝐨𝐟 𝐧𝐨𝐭 𝐢𝐧𝐭𝐞𝐠𝐫𝐚𝐭𝐢𝐧𝐠 𝐦𝐚𝐫𝐤𝐞𝐭 𝐟𝐢𝐭 𝐫𝐞𝐬𝐞𝐚𝐫𝐜𝐡 𝐢𝐧𝐬𝐢𝐠𝐡𝐭𝐬 𝐢𝐧𝐭𝐨 𝐭𝐡𝐞𝐢𝐫 𝐩𝐫𝐨𝐝𝐮𝐜𝐭 𝐝𝐞𝐬𝐢𝐠𝐧
- High Failure Rates: According to CB Insights, one of the top reasons startups fail is a lack of market need for their product. About 42% of startups cited "no market need" as the primary reason for their failure.
- Wasteful Spending: Harvard Business Review highlights that many companies waste money developing features that users don't want. Studies suggest that approximately 35% of features in a typical system are never used, and around 19% are rarely used.

𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬 𝐨𝐟 𝐚 𝐅𝐞𝐞𝐝𝐛𝐚𝐜𝐤-𝐎𝐫𝐢𝐞𝐧𝐭𝐞𝐝 𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡
- Increased User Satisfaction: Products developed with user input are more likely to meet the actual needs and preferences of the target audience.
- Cost Efficiency: Reducing time spent on unwanted features saves money and directs resources towards more impactful developments.
- Enhanced Adaptability: A feedback loop facilitates quick pivots and adjustments, which is crucial in fast-paced market environments.

𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞𝐬
- Continuous Commitment: Integrating continuous user feedback requires dedication and can be resource-intensive.
- Handling Negative Feedback: Developers must be prepared to receive and constructively use negative feedback, which can sometimes lead to significant changes in project scope.

🔄 How do you integrate user feedback into your product development process? What lessons have you learned from observing user interaction with your products? #innovation #technology #future #management #startups
-
Lyft knew they had a problem. Only 5.6% of its users are over 65, and those users are 57% more likely to miss the ride they ordered. So Lyft created Silver, a special app version for seniors. But why create a separate app when these improvements would benefit all users?

The curb-cut effect is real. Features designed for wheelchair users ended up helping parents with strollers, travelers with luggage, and delivery workers with carts. The features in Lyft's senior-friendly app wouldn't only benefit older riders:
💡 The 1.4x larger font option? Great for bright sunlight and rough rides.
💡 Simplified interface? Less cognitive load for all of us.
💡 Live help operators? Great for anyone when there's a problem.
💡 A preference setting for easy entry/exit vehicles? Not everyone likes pickup trucks.

What started as an accommodation became a universal improvement. The most powerful insight? Designing for seniors forced Lyft to prioritize what truly matters: simplicity and ease of use. Will they leverage this for all their users?

The next time someone suggests adding another button to your interface or feature to your product, consider this approach instead: sometimes the most innovative design is the one that works for everyone. Rather than creating separate "accessible" versions, what if we just built our core products to be usable by all? This is the paradox of inclusive design: what works better for some almost always works better for all.

What "accessibility" feature have you encountered that actually made life better for all users? #UniversalDesign #ProductThinking #CustomerExperience
-
Sometimes it feels like UX has become a game of persona theater. We craft these nice-looking slides about "Jay, 34, coffee-loving project manager who values simplicity," and everyone nods like we've uncovered deep truth. But when the design breaks or no one clicks the CTA, Jay is nowhere to be found. Let's be honest: these types of personas are often just decorative empathy. They help us feel user-centered without actually being useful when things get messy.

But what if we had a cognitive map that went beyond catchy bios and actually told us how users tend to engage with complexity, multitasking, system feedback, or onboarding? That's where a cognitive profile comes in. It doesn't try to humanize a user; it tries to operationalize them. You're not just looking at what a user wants; you're understanding how they work through a product, what slows them down, what motivates them to continue, and how they adapt when things go wrong. It's not psychology for its own sake; it's design-ready insight.

Creating a cognitive profile isn't about running time-consuming clinical tests. It comes from observing real behaviors across research sessions, identifying shared interaction patterns, triangulating survey or performance data, and mapping consistent mental strategies. Maybe your users frequently skip explanations, or maybe they show decision fatigue quickly after three options. Maybe they don't trust automation unless there's a visible "undo" feature. These patterns, gathered through mixed methods, can be framed into a practical guide that complements personas and helps the whole team see friction points before they show up in usability metrics.

Let's say you're designing a scheduling app for community college students juggling jobs and caregiving. A persona might say they're busy and stressed. Helpful, but vague.
A cognitive profile would show this group tends to rely on short bursts of interaction, avoids multi-step flows unless guided visually, prefers certainty over optionality, and is more likely to complete tasks when there's a clear success cue. Now your research plan includes testing decision pacing, your interface reduces unnecessary choices, and your design prioritizes clarity over customization. This is where research stops being symbolic and starts being strategic.

UX has spent years trying to make things simpler, but sometimes we've made them too simple and unscientific (more like artwork). In the pursuit of clarity, we've stripped away nuance, complexity, and the messy beauty of real human behavior. A persona can tell you someone likes coffee. A cognitive profile can tell you why they abandon your onboarding flow after ten seconds. Oversimplification might feel like focus, but it's not insight. Oversimplify a painting and you ruin it. Do that to people, and you ruin your research!
-
I see product managers who have the title but don't do the job. They expect customer feedback to do the job for them. Customer feedback is invaluable and incredibly important, but it's only one factor in deciding what to build. Other important factors are:

📊 Data
➠ Good product managers use data to validate and prioritize customer feedback.
➠ They dig into product analytics, user behavior metrics, and market research to ensure that the features they build will drive real value for users.
➠ Without data, it's easy to misinterpret or over-prioritize certain customer requests.

🧏 Intuition
➠ Really strong product managers combine customer feedback and data with their intuition, which is based on experience.
➠ They've seen the outcomes of previous projects and can often anticipate what customers need, even if customers themselves can't fully articulate it.
➠ This intuition allows them to innovate and propose solutions that customers may not have envisioned but will ultimately love.

🧭 Strategy
➠ To be truly effective, product managers need to align the product with the broader company vision.
➠ They need to understand that not every customer request should be fulfilled, especially if the request doesn't fit with where the product or company is heading.
➠ The best product managers are masters of balancing immediate user needs against future growth.

🏗️ Think of an Architect
If an architect tried to accommodate every request, the result would be a chaotic, disjointed design. By following a clear blueprint and vision, while thoughtfully incorporating feedback, they create a space that is functional, beautiful, and aligned with the needs of its occupants. Like product managers, architects know that not every idea fits the plan.

A real product manager listens to the feedback but doesn't follow it blindly. They mix it into everything else informing their decisions. Product people: what kinds of feedback do you follow vs. backburner?
-
At 50 million users, we spend 80% of our time listening to customers. At 100 users, we spent 80% of our time ignoring what customers asked for. Both strategies were right.

When we started Gamma, 80% of our effort went into one radical idea: "What if creating presentations felt effortless? Type your thoughts, skip the design entirely." The other 20%? Table stakes. Present mode. Share button. Just enough to technically qualify as a presentation tool. We were solving a universal pain point: that blank slide staring back at you, demanding design skills you don't have.

As users started signing up, then churning, we shifted to 50/50: half pushing the AI forward, half understanding why people left. Turns out when someone can't change fonts or upload their logo, they don't stick around. Even if your AI is magic.

Then we hit an inflection point: more users signing up than we knew what to do with. The 80/20 flipped completely. Today, 80% of our roadmap comes from our community and user requests. When the same friction point appears fifty times, we fix it. We listen ruthlessly. The other 20% stays reserved for bets they'd never ask for, like building websites when they asked for better slides.

Most product advice says pick a lane: visionary or customer-obsessed. That's backwards. You need both. The mix just evolves:
- Pre-product market fit → 80% paradigm shifts, 20% basics
- Approaching PMF → a 50/50 balancing act
- True inflection → 80% customer pulls, 20% future bets

We've built to 50 million users this way. Not by choosing between vision and feedback, but by knowing when to weigh each one. Nobody asked for content-first presentations … they asked for prettier templates. Sometimes the best answer isn't giving customers what they request. It's solving the problem they didn't know how to articulate.
-
When something feels off, I like to dig into why. I came across this feedback UX that intrigued me because it seemingly never ended (following a very brief interaction with a customer service rep). So here's a nerdy breakdown of feedback UX flows: what works vs. what doesn't.

A former colleague once introduced me to the German term "Salamitaktik," which roughly translates to asking for a whole salami one slice at a time. I thought about this recently when I came across Backcountry's feedback UX. It starts off simple: "Rate your experience." But then it keeps going. No progress indicator, no clear stopping point, just more questions.

What makes this feedback UX frustrating?
– Disproportionate to the interaction (too much effort for a small ask)
– Encourages extreme responses (people with strong opinions stick around, others drop off)
– No sense of completion (users don't know when they're done)

Compare this to Uber's rating flow: you finish a ride, rate 1-5 stars, and you're done. A streamlined model: fast, predictable, actionable (the whole salami).

So what makes a good feedback flow?
– Respect users' time
– Prioritize the most important questions up front
– Keep it short; remove anything unnecessary
– Let users opt in to provide extra details
– Set clear expectations (how many steps, where they are)
– Allow users to leave at any time

Backcountry's current flow asks eight separate questions. But really, they just need two:
1. Was the issue resolved?
2. How well did the customer service rep perform?
That's enough to know whether they need to follow up and to assess service quality, without overwhelming the user.

More feedback isn't always better; better-structured feedback is. Backcountry's feedback UX runs on Medallia, but this isn't a tooling issue; it's a design issue. Good feedback flows focus on signal, not volume. What are the best and worst feedback UXs you've seen?
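That "short required core plus opt-in detail" shape is easy to sketch. A toy illustration only: the class, question wording, and opt-in mechanism here are invented, not Backcountry's or Medallia's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeedbackFlow:
    """A minimal feedback flow: a short required core, optional depth."""
    required: List[str]  # asked of everyone, most important first
    optional: List[str] = field(default_factory=list)  # only if the user opts in

    def questions_for(self, opted_in: bool) -> List[str]:
        # Every user sees the required questions; opted-in users see more.
        return self.required + (self.optional if opted_in else [])

flow = FeedbackFlow(
    required=[
        "Was your issue resolved?",
        "How well did the rep handle your request? (1-5)",
    ],
    optional=["Anything else we should know?"],
)
```

The design choice is the point: length is bounded by default (two questions), and extra depth is something the user asks for rather than something the survey imposes.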