The New Hire Test for Success

by Kristen DeLap


What would success look like if we had to explain it to a new hire?

When someone new joins the team, they often ask deceptively simple questions.
“How do we know if we’re doing well?”

On the surface, that question is about clarity.
At a deeper level, it’s about coherence.

Having a new team member is a quiet forcing function you can use to accelerate the team’s coherence. New hires don’t know your history, and they don’t know your internal shorthand. Most importantly, they don’t know which metrics are sacred and which are ceremonial.

John Cutler often critiques “success theater,” where vanity metrics and dashboards look like they’re creating signal but actually generate noise. A new team member won’t know the difference right away. They’ll take what we present at face value.

Many teams can list metrics. Fewer can describe what winning actually feels like. And if explaining success requires a 40-slide deck, or a dozen KPIs and OKRs layered together, that’s a signal worth noticing. As Richard Rumelt has written, “Good strategy is simple enough to explain, but disciplined enough to execute.” If you can’t explain success simply, there’s a good chance your strategy is either fragmented or overly abstract.


STAND-UP EXERCISE

Use the idea of a new team member as a diagnostic lens.

Take a few moments asynchronously to write down how you would explain the team’s success to a fictional new hire. Think not just about the metrics, but also the meaning. What would you have them pay attention to? What matters most?

Then bring those explanations together and look for themes. Did different practice areas have distinct definitions? Did folks with varied lengths of tenure on the team explain it differently?

This exercise isn’t just about onboarding; it is about surfacing potential misalignment within the team. The places where definitions diverge are often the places where tension quietly lives. Use this to come together on a shared definition of success for your team and for your product. If success can’t be explained coherently, it can’t be protected intentionally.

[Illustration: three team members welcoming a new team member, in exaggerated illustration style.]

Dimensions of Quality

by Kristen DeLap


Most product teams care deeply about quality. But when timelines tighten or pressure increases, quality often becomes a vague, emotionally loaded concept. One person worries we’re moving too fast. Another worries we’re overthinking. Conversations stall because we’re using the same word to mean different things.

Quality isn’t a single standard you either meet or miss. It’s a set of attributes that compete for attention. And different moments in a product’s life put different kinds of quality at risk.

Quality Isn’t One Thing

One of the most useful ways to talk about quality comes from Ami Vora, who describes four distinct dimensions teams are constantly balancing:

  • Performance — how fast, responsive, and reliable the product feels

  • Bugs — correctness, stability, and freedom from defects

  • Completeness — whether the solution actually solves the full problem

  • Consistency — coherence across flows, surfaces, and behaviors

Every team makes tradeoffs across these dimensions. That’s not a failure; it’s reality. The problem arises when those tradeoffs are implicit. When no one names what’s most fragile, teams start talking past each other, and quality debates become personal instead of practical.

When quality discussions stay abstract, they tend to escalate quickly. Engineers may feel asked to cut corners. Designers may feel pressure to ship something unfinished. PMs may feel caught between speed and responsibility.

But when a team can say, “This quarter, completeness is the riskiest thing for us,” or “Performance is the edge we can’t afford to dull,” something shifts. The conversation becomes about judgment, not virtue. It frees you up to focus on intent, not blame. Naming risk creates shared context. It gives teams language to explain decisions and empathy for why others feel tension.

Where Quality Risk Shows Up

You can usually spot quality risk by paying attention to where friction accumulates.

  • Where are we cutting scope, and what kind of quality does that affect? Are we creating an experience that feels partial or awkward to a user?

  • Where are we deferring work, and what assumptions are we making about impact? Are our inconsistencies eroding trust with other teams?

  • Where are customer complaints, internal friction, or workarounds starting to cluster? Are there bugs that felt acceptable individually but now feel risky in aggregate?

Each of these points to a different dimension of quality, and recognizing which one matters most at this moment is what enables better decisions. The goal isn’t to eliminate risk. It’s to make it visible. When teams agree on which kind of quality is most fragile, they can protect it more deliberately, communicate tradeoffs more clearly, and move faster with less friction.


STAND-UP EXERCISE
In your next stand-up, ask the team to vote in a shared workspace on one question:

Which quality dimension feels most at risk right now?
Performance · Bugs · Completeness · Consistency

Notice where there’s alignment, or surprise. Invite people to talk about why they voted the way they did. Listen for patterns across roles or perspectives. Engineers, designers, and PMs often see different risks, and all of them are valid signals.

Going forward, for this sprint, quarter, or release, explicitly name which quality dimension you’re prioritizing, and which one you’re consciously putting at risk.

[Illustration: Miro board of dot voting with sections for performance, bugs, completeness, and consistency.]

Tooling (and AI, of course)

by Kristen DeLap


The market is overflowing with tools: AI assistants, collaboration platforms, analytics dashboards, niche SaaS products (with AI integrated!) for every imaginable workflow. It’s tempting to try them all. But tooling is not strategy. A good tool accelerates existing strengths; a poor one multiplies inefficiencies.

Some organizations love to collect tools like shiny objects. Every pain point comes with a new platform, subscription, or “quick fix.” And of course AI has entered the landscape by force, as standalone chat and as wizards embedded within other platforms. Some teams are mandated to use AI so that the business is not “left behind.” The problem: tools don’t solve problems. People do. Tools only accelerate (or complicate) the work, depending on how they’re introduced and adopted.

Effective teams treat tools as part of their operating model, not as shiny objects. They introduce them intentionally, guided by a few questions:

  1. Fit — Does the tool align with the way people already work? If it requires people to constantly context-switch or feels like extra work, adoption will die fast. Tools should dissolve into existing habits, not demand entirely new ones.

  2. Friction — Does it remove barriers or add new ones? Does it work for everyone on the team or just a subset of folks?

  3. Focus — Is it solving the right problem, or just a problem? Does it distract from what we should actually be working on?

  4. Flexibility — What’s the plan if the tool doesn’t work out? Too often, teams get stuck with tools that don’t scale or can’t integrate. Part of “trying something new” should always include the question: If this doesn’t work, how do we exit?

Organizations that answer these questions up front can save themselves months of rework and resistance.

Too often, teams focus on what tools to use instead of how to use tools well. A team that doesn’t know how to run effective meetings won’t suddenly become effective with an AI note-taker. A company that avoids tough prioritization decisions won’t magically improve by adding another project management suite.

The tools that truly stick for your team might not be the buzzy ones. FigJam for collaboration. Miro for brainstorming. ContentSquare for understanding behavior. Yes, ChatGPT for drafting and discovery. But sometimes a shared doc and a standing meeting are still the most powerful tools you can have.

The best teams don’t chase every new tool. They learn how to audit, experiment, and fold the right ones into their culture. That’s how tools stop being shiny objects and start being leverage.


STAND-UP EXERCISE

At your next team stand-up, run a quick tooling audit together:

  • List the top 3–5 tools your team uses daily.

  • For each tool, ask:

    • How does this fit into our flow of work?

    • What friction does it remove? What friction does it add?

    • Are we using it to solve the right problems?

  • Choose one tool to experiment with improving. Compare notes on how each of you uses it. Does someone need more training? Are there ways to use it more effectively, or to simplify? Or is it time to sunset it, or an overlapping tool?

The goal isn’t to chase the next new platform. It’s to ensure the tools in use are actually serving the team, the process, and the outcomes.