February 2, 2026

How to Create a “What We Heard” Slide That Doesn’t Feel Fluffy

Discover how to transform generic 'What We Heard' slides into powerful tools that drive action. This guide shows research and marketing teams how to structure customer feedback with specificity, quantification, and clear implications—turning potentially fluffy summaries into strategic assets that lead to real decisions.


We've all been there. You're reviewing research findings, and you reach the dreaded 'What We Heard' slide. It's filled with vague statements, cherry-picked quotes, and generalizations that leave you wondering: "So what?" and "Says who?" These slides often feel more like creative writing exercises than meaningful insights that drive action.

But it doesn't have to be this way. 'What We Heard' slides can be powerful tools for synthesizing customer feedback and driving decision-making—if they're built correctly. Let's explore how to transform these potentially fluffy summaries into substantive, action-oriented assets.

The Problem with Traditional 'What We Heard' Slides

Traditional 'What We Heard' slides suffer from several key weaknesses:

  1. Vague generalizations: "Customers want better features" (Which customers? Which features?)
  2. Cherry-picked quotes: Highlighting the one person who said what you wanted to hear
  3. Missing quantification: No sense of how widespread a sentiment actually is
  4. No clear implications: Failing to connect findings to actions

As a result, these slides often fail to influence decision-making, because they lack credibility and actionability.

Building a 'What We Heard' Slide with Substance

1. Segment Your Findings

Instead of lumping all feedback together, organize what you heard by relevant segments:

  • By user persona: "What we heard from power users vs. occasional users"
  • By pain point: "What we heard about onboarding vs. advanced features"
  • By use case: "What we heard from marketing teams vs. product teams"

This segmentation immediately adds specificity and nuance that generic statements lack.

Research from the Nielsen Norman Group has repeatedly found that insights tied to specific user groups are far more likely to be acted upon than general observations.

2. Quantify the Feedback

Even in qualitative research, you can and should quantify patterns:

  • "7 out of 10 enterprise customers mentioned challenges with the reporting workflow"
  • "Onboarding friction was cited by 85% of new users as their primary pain point"
  • "Account management concerns emerged in 12/15 interviews with mid-market customers"

Quantification transforms vague impressions into credible findings that are harder to dismiss. As Tom Fishburne, founder of Marketoonist, notes: "The best insights combine the emotional power of stories with the credibility of numbers."
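If interview notes are coded with the themes each participant raised, tallies like those above can be produced mechanically instead of by eyeballing transcripts. A minimal sketch, where the segments, themes, and counts are invented for illustration:

```python
from collections import Counter

# Hypothetical coded notes: each interview is tagged with the themes it raised.
interviews = [
    {"segment": "enterprise", "themes": {"reporting", "onboarding"}},
    {"segment": "enterprise", "themes": {"reporting"}},
    {"segment": "mid-market", "themes": {"onboarding", "pricing"}},
    {"segment": "mid-market", "themes": {"pricing"}},
]

# Count how many interviews mentioned each theme.
theme_counts = Counter(t for i in interviews for t in i["themes"])
total = len(interviews)

for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}/{total} interviews ({n / total:.0%})")
```

The point isn't the code itself: it's that coding your notes up front makes "7 out of 10 mentioned X" a query, not a guess.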

3. Use the Strength-of-Signal Framework

Not all feedback carries equal weight. Adopt a framework that communicates the strength of different signals:

Strong Signal: Consistent feedback across multiple segments with specific examples

  • Example: "Strong signal: Users across all segments struggle with the export function, specifically citing format compatibility issues and processing time"

Medium Signal: Patterns emerging in specific segments or use cases

  • Example: "Medium signal: Enterprise users specifically request more granular permission controls, with 5/8 mentioning role-based access as a priority"

Weak Signal: Interesting but isolated feedback worth monitoring

  • Example: "Weak signal: Two power users suggested integration with specialized tools (Figma, Miro) for their workflow"

4. Connect Findings to Implications

A truly valuable 'What We Heard' slide doesn't just report findings—it translates them into clear implications:

Finding: "8/10 users struggled to find the analytics dashboard after our navigation redesign"

Implication: "Our new navigation system is hiding critical functionality, suggesting we need to reconsider the information architecture or add contextual guidance"

This connection transforms passive reporting into active guidance for decision-makers.

A Template That Works

Here's a simple but effective template for 'What We Heard' slides that avoids fluff:

What We Heard: [Topic/Feature/Process]

Strong Signals (widespread, consistent feedback):
- Finding: [Specific observation with numbers]
  Implication: [What this means for our strategy/product]

Medium Signals (emerging patterns):
- Finding: [Specific observation with numbers]
  Implication: [What this might mean if confirmed]

Segment Differences:
- [Segment A] expressed [specific concern] (7/10 interviews)
- [Segment B] prioritized [specific need] (6/8 interviews)

Recommended Next Steps:
1. [Action item based on strongest signals]
2. [Research question to resolve uncertainty]
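For teams that reuse a template like this across studies, it can help to keep the slide content as structured data and generate the text from it, so every study's slide has the same shape. A minimal sketch with invented example content:

```python
# A hypothetical slide outline held as plain data (all content is illustrative).
slide = {
    "topic": "Pricing Page Feedback",
    "strong": [("18/20 prospects couldn't pick a plan", "Pricing blocks conversion")],
    "medium": [("7/10 mid-market want usage-based pricing", "Explore hybrid model")],
    "next_steps": ["Redesign comparison table", "A/B test three-tier model"],
}

# Render the data into the template's text form.
lines = [f"What We Heard: {slide['topic']}", "", "Strong Signals:"]
for finding, implication in slide["strong"]:
    lines += [f"- Finding: {finding}", f"  Implication: {implication}"]
lines += ["", "Medium Signals:"]
for finding, implication in slide["medium"]:
    lines += [f"- Finding: {finding}", f"  Implication: {implication}"]
lines += ["", "Recommended Next Steps:"]
lines += [f"{i}. {step}" for i, step in enumerate(slide["next_steps"], 1)]

print("\n".join(lines))
```

Holding findings as data also makes it trivial to diff one study's slide against the next.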

Real-World Example

Let's look at how this might work in practice for a SaaS product team:

Fluffy Version:
"Users found our pricing page confusing and expressed interest in more flexible options."

Substantive Version:
"What We Heard: Pricing Page Feedback

Strong Signal: 18/20 prospects couldn't determine which plan was right for their needs

  • 12/20 specifically mentioned confusion about feature differences between tiers
  • Enterprise prospects (5/5) asked for clearer volume discounting information

Implication: Our current pricing structure and presentation create a direct barrier to conversion

Medium Signal: Mid-market customers (7/10) expressed interest in a usage-based option
Implication: We should explore a hybrid pricing model for this segment, as it may address their scaling concerns

Recommended Next Steps:

  1. Redesign pricing comparison table with clearer feature differentiation
  2. A/B test a simplified three-tier model vs. current four-tier structure
  3. Develop prototype of usage-based option for mid-market testing"

Making Your Slides Visually Effective

Presentation matters too. Consider these design principles:

  1. Use visual hierarchy to distinguish between strong and weak signals
  2. Incorporate simple data visualization to show the prevalence of different themes
  3. Include selective verbatim quotes as supporting evidence, not as the main findings
  4. Create visual connections between findings and implications

Visual information is processed far faster than dense text, making well-designed visual elements crucial for communicating complex findings.

When to Create These Slides

The best 'What We Heard' slides aren't created after all the research is complete. Instead:

  1. Start the template before research begins with your key questions
  2. Update it iteratively as patterns emerge
  3. Validate your synthesis with stakeholders before finalizing
  4. Revisit and refine as additional data becomes available

This iterative approach ensures your findings stay grounded in actual data rather than post-hoc interpretations.

Conclusion: From Fluffy to Foundational

'What We Heard' slides don't need to be exercises in creative writing or vague summarization. When built with specificity, quantification, and clear implications, they become foundational tools for decision-making.

By applying these principles, you transform potentially fluffy feedback summaries into strategic assets that drive action. The next time you need to communicate research findings, remember that what makes these slides valuable isn't just what you heard—it's how you structure, quantify, and translate that information into meaningful direction.

Your stakeholders will thank you for slides that don't just sound good, but actually help them make better decisions.
