Are You "Cracked"? What Product Leaders Need to Know About AI Right Now

Last week Enrich brought together a group of product and engineering executives over dinner to talk honestly about how AI is reshaping the work of building products. The conversation, facilitated by product leaders Michael Sippey and Sarah Bernard, covered everything from how to evaluate your team's AI fluency, to the existential question of whether AI will eventually manage people. Here are our key takeaways.

The New Benchmark: Are Your People "Cracked"?

Michael opened the conversation with a word that's started circulating among AI-forward teams: cracked. As in, how AI-forward is this person? How deeply are they using AI in their actual work, not just experimenting with it on the side?

The hypothesis on the table was blunt: product managers who aren't deeply engaged with AI are losing leverage at a rate that could make them obsolete. That's a strong claim, but the examples backed it up. Companies like Meta and Shopify have already built internal token usage dashboards to track and incentivize AI adoption across their organizations. Token usage isn't a perfect signal; the group agreed it loses value as a proxy metric after roughly three months. But the fact that these companies are measuring it at all tells you something about how seriously they're taking adoption.

Beyond Token Counting: A Hierarchy for AI Maturity

If token dashboards are a blunt instrument, what's a better framework? The group surfaced a four-level hierarchy, attributed to Brian Chesky at Airbnb, that maps how teams progress in their use of AI:

  • Level 1: Completing one-time tasks

  • Level 2: Deploying tasks into the future (automation, recurring workflows)

  • Level 3: Building the infrastructure that enables AI to operate at scale

  • Level 4: Autonomous agents working toward objectives with minimal human oversight

Most teams are still at Level 1 or 2. Level 3 requires real infrastructure investment, and Level 4 is where the most competitive leverage will eventually live, but it's also the hardest to reach. The practical takeaway: don't just track whether people use AI. Understand what level they're operating at, and build toward the next one.

One of the most effective tactics shared for accelerating team adoption is to show people what "good" looks like. Have top performers demonstrate real, deep AI usage on actual projects, not sanitized demos. Peer visibility changes behavior faster than policy.

The Talent Question: Native vs. Experienced

One of the liveliest debates of the evening centered on hiring. Should you prioritize AI-native junior talent who've grown up building with these tools, or experienced senior operators who can manage teams and make judgment calls?

There was no clean consensus, but the group coalesced around what to look for regardless of level: curiosity, willingness to experiment and fail, and most critically, not holding onto old ways of working. That last trait turned out to be the real filter. AI adoption isn't primarily a skills problem; it's a mindset problem.

Shopify's recent announcement that it will scale its intern program from roughly 100–200 to 1,000 people, specifically focused on AI-native development, signals where at least one major company is placing its bets. Whether that's the right move for every organization is debatable, but the directional shift is worth noting.

Taste Is the New Moat

If AI can generate a first draft of almost anything, what becomes the scarce resource? The group's answer was taste.

Being truly "cracked" at AI work isn't about prompting volume. It's about the ability to take AI-generated output, which can often be "slop," and refine it into something genuinely good. That requires judgment, discernment, and a deep understanding of quality. And therein lies a tension: if AI handles more and more of the creation, does the craft knowledge required to evaluate it slowly erode?

This concern came up most sharply from the design perspective. Deep craft knowledge, the kind that lets you immediately recognize when something is off, may be harder to develop if practitioners spend less time doing the work by hand. It's a long-term risk worth watching.

On the flip side, AI is unlocking product ideas that would never have made it off the backlog before. One example from the room was context-sensitive logo colors, a feature too niche to justify engineering time in a world of human developers, but entirely feasible with AI. The implementation bar for low-priority ideas just dropped significantly.

One pointed critique of current AI models is that they're too sycophantic: they confirm rather than challenge. The ideal AI collaborator, the group agreed, should be "one step ahead," able to anticipate where a user is going and helpfully redirect them before they head somewhere unproductive.

Ship Now, Improve Later

A recurring strategic tension: how much should you invest in UX and product design around today's AI capabilities, knowing that the models will be substantially better in six months?

The group's consensus was clear: you can't wait. The next model will always be better. If you hold your product back waiting for ideal conditions, a competitor will ship something good enough right now and own the customer relationship. Ship with what's available, set appropriate expectations with customers (including building in human review loops where accuracy matters), and iterate.

What AI Is Actually Good For (and What It Isn't)

The conversation surfaced several compelling real-world applications, alongside an honest accounting of limitations.

On the capability side, a gaming studio used AI to simulate 800 gameplay sessions across 8 player archetypes in just four hours, a process that would have taken weeks of human research. The e-commerce space is exploring synthetic users trained on browsing behavior to reduce expensive A/B testing cycles. Some are even simulating upcoming meetings with AI stand-ins for key participants to stress-test their preparation.

But the group was careful not to oversell it. One useful framing is that AI functions like a concave mirror: it reflects and focuses existing knowledge rather than generating genuinely new insights. Synthetic research can accelerate the work, but it lacks the empathy layer of real human observation. When the stakes are high or the situation is novel, human judgment still matters.

The proposed rule of thumb: AI probably covers 80% of standard, repeatable work well. The remaining 20% (the outliers, the breakthroughs, the edge cases that don't fit patterns) still needs human creativity and judgment.

What Comes Next

The dinner ended with a few forward-looking observations.

  • Products will become more customizable as AI capabilities expand. But most customers, the group argued, aren't buying raw tools. They're buying expertise and know-how. The companies that win won't just build flexible AI systems; they'll be the ones that encode deep domain knowledge into those systems in ways that are hard to replicate.

  • APIs and integrations will increasingly reveal how customers actually use products, not what they say they do. That behavioral data becomes a strategic asset.

  • And the uncomfortable question at the end of the evening was: at what point would you be comfortable reporting to an AI manager? Nobody had a definitive answer. But the fact that it's a serious question now, rather than a hypothetical, tells you something about the pace of change.

This dinner was hosted by Enrich, a private network for product and technology leaders. Check out upcoming events here, or apply to join here.
