<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Lean Product Growth: AI Notes]]></title><description><![CDATA[How AI creates value in products and organizations — and the complexity that comes with adopting it well.]]></description><link>https://www.enlighten.services/s/ai-notes</link><image><url>https://substackcdn.com/image/fetch/$s_!hEd8!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff88552ef-9b8d-4ef4-96aa-8d95d0168bc5_663x663.png</url><title>Lean Product Growth: AI Notes</title><link>https://www.enlighten.services/s/ai-notes</link></image><generator>Substack</generator><lastBuildDate>Fri, 15 May 2026 23:52:51 GMT</lastBuildDate><atom:link href="https://www.enlighten.services/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[M Stojanovski]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[leanproductgrowth@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[leanproductgrowth@substack.com]]></itunes:email><itunes:name><![CDATA[Marina]]></itunes:name></itunes:owner><itunes:author><![CDATA[Marina]]></itunes:author><googleplay:owner><![CDATA[leanproductgrowth@substack.com]]></googleplay:owner><googleplay:email><![CDATA[leanproductgrowth@substack.com]]></googleplay:email><googleplay:author><![CDATA[Marina]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Responsible AI in Product: Where AI Should Not Decide]]></title><description><![CDATA[The question has shifted.]]></description><link>https://www.enlighten.services/p/responsible-ai-in-product-where-ai</link><guid 
isPermaLink="false">https://www.enlighten.services/p/responsible-ai-in-product-where-ai</guid><dc:creator><![CDATA[Marina]]></dc:creator><pubDate>Wed, 22 Apr 2026 09:44:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iRJ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The question has shifted. </p><p>It is no longer <em>Whether to use AI</em>. </p><p>The question now is </p><ul><li><p><em>Where not to use it? </em></p></li><li><p><em>Who owns the outcome when something goes wrong?  </em></p></li><li><p><em>Who decides how much uncertainty is acceptable?</em></p></li></ul><p>These are not edge cases. They are the new core of building products <strong>responsibly</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iRJ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iRJ5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!iRJ5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!iRJ5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!iRJ5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iRJ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1788208,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.enlighten.services/i/193884302?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iRJ5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!iRJ5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!iRJ5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!iRJ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dd35d02-ed3d-4358-8dea-0ec20882c407_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h3><strong>When a recommendation becomes a liability</strong></h3><p>A team adds an AI recommendation layer to help users triage
issues faster. Instead of reviewing everything manually, the product surfaces a suggested next step. It reduces noise. Users love it.</p><p>Then one user trusts it a little too much. They delay action, and the issue escalates. A contractual SLA is breached.</p><p>Suddenly the conversation is no longer about whether the feature is useful. It is about who is responsible. Product points at Engineering. Engineering points at the model. There is no clear answer because nobody asked the right question upfront.</p><p>The right question was not &#8220;is the model good enough?&#8221; The model was fine.</p><p>The missing question was: <em>What happens when a user trusts the system completely, in a situation we never designed for?</em></p><h3><strong>Accuracy is necessary. It is not enough.</strong></h3><p>Asking whether the model is accurate is not the wrong debate. It is the right starting point. You absolutely need to know how the model performs and how often it fails.</p><p>But accuracy alone is not enough. Accuracy tells you how the model behaves in aggregate. It tells you nothing about what happens at the boundary &#8212; when the output is confidently wrong and the user has no way to tell the difference.</p><p>A great model can still cause real harm when it meets undefined governance. When nobody decided how uncertainty should be communicated. When the design made a suggestion look like a fact.</p><p>That is the boundary question. And it is a genuinely different question from model quality.</p><h3><strong>When one function dominates, the product breaks differently</strong></h3><p>AI features tend to reflect who led them. And every function, leading alone, has a blind spot.</p><p>When <em>Product</em> leads, things get cleaner and faster. But product thinking is built for the typical user. It misses the edge &#8212; the person who trusts the recommendation at exactly the wrong moment, in exactly the wrong context.
Smoother is not the same as safer.</p><p>When <em>Engineering</em> leads, the system gets more cautious about where probabilistic behavior is allowed. Good. But rigour without usability doesn&#8217;t solve the user problem. You can build something technically correct that leaves the actual problem unanswered.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hOcV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hOcV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png 424w, https://substackcdn.com/image/fetch/$s_!hOcV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png 848w, https://substackcdn.com/image/fetch/$s_!hOcV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png 1272w, https://substackcdn.com/image/fetch/$s_!hOcV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hOcV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png" width="1224" height="298" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:298,&quot;width&quot;:1224,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:80395,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.enlighten.services/i/193884302?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hOcV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png 424w, https://substackcdn.com/image/fetch/$s_!hOcV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png 848w, https://substackcdn.com/image/fetch/$s_!hOcV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png 1272w, https://substackcdn.com/image/fetch/$s_!hOcV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa567794f-21d7-4767-a8e1-d08c74d9f1b4_1224x298.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Each function has a blindspot</figcaption></figure></div><p>When <em>Design</em> leads, the interface gets clearer and more intuitive. 
But design thinking is focused on comprehension &#8212; can the user read this, find this, act on this. It doesn&#8217;t naturally ask the harder question: <em>Is this what the system knows, or what it thinks?</em> That distinction is easy to miss in a design review and very hard to explain after an incident.</p><p>When <em>Legal and Compliance</em> show up just before product launch, the organisation finds out too late that a useful feature also creates exposure. The question becomes &#8220;<em>How do we add a disclaimer?</em>&#8221; when it should have been &#8220;<em>Should this layer be probabilistic at all?</em>&#8221;</p><p>No single function sees the full picture. The failure modes are different. The protection has to be shared.</p><h3><strong>Four layers, one rule</strong></h3><p>Not all parts of a product carry the same risk. The mistake is treating AI as a single capability and spreading it evenly. A more useful way to think about it is through layers, each with a different tolerance for uncertainty.</p><p>The <strong>Truth layer</strong> is the foundation. Source data, user input, explicit rules, system state. This stays deterministic. No AI interpretation, no probabilistic behaviour. Users must always be able to come back here and verify what is actually true.</p><p>The <strong>Interpretation layer</strong> is where AI earns its first real role &#8212; summarising, classifying, grouping, making complexity easier to navigate. Probabilistic behaviour is fine here, as long as the output stays traceable back to the truth.</p><p>The <strong>Advisory layer</strong> is where the product starts recommending, proposing, surfacing likely answers. This is where most of the value lives. But it is also where <em>the risk climbs fast.</em> The moment users start acting on AI output, the design bar changes completely.</p><p>The <strong>Execution layer</strong> is where the system acts &#8212; triggers workflows, changes state, sends messages on your behalf.
This should stay deterministic unless you have defined, and explicitly agreed on, very tight boundaries around where AI can operate.</p><p>Running through all four &#8212; not as a layer itself, but as the condition that makes the others trustworthy &#8212; is <em>Transparency &amp; User Control</em>. Users need to know what is original, what is inferred, what is proposed, and what they can still change. They remain in control.</p><blockquote><p><strong>Transparency</strong> is not a design choice. It is what makes probabilistic AI usable.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3wJ7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3wJ7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png 424w, https://substackcdn.com/image/fetch/$s_!3wJ7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png 848w, https://substackcdn.com/image/fetch/$s_!3wJ7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png 1272w, https://substackcdn.com/image/fetch/$s_!3wJ7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!3wJ7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png" width="1172" height="892" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:892,&quot;width&quot;:1172,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:207707,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.enlighten.services/i/193884302?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3wJ7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png 424w, https://substackcdn.com/image/fetch/$s_!3wJ7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png 848w, https://substackcdn.com/image/fetch/$s_!3wJ7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png 1272w, https://substackcdn.com/image/fetch/$s_!3wJ7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1219565f-7b15-4d16-87ca-3fa88de9e62c_1172x892.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a></figure></div></blockquote><h3><strong>What this looks like in practice</strong></h3><p>Take a clinical decision support tool used by a hospital triage team. Nurses and junior doctors are working through a queue. The system has access to patient history, recent vitals, lab results, and current medication. It is trying to help them move faster without missing something critical.</p><p>The patient record stays untouched. Every data point is sourced, timestamped, auditable. That is the truth layer &#8212; nobody interprets it away.
If something looks wrong, you go back to the source.</p><p>The model then reads across that data and flags patterns, e.g. <em>these inflammation markers combined with this medication history have preceded serious complications in similar patients.</em> It is not telling anyone what to do. It is making a connection that a tired clinician on a night shift might have missed. That is the interpretation layer.</p><p>The product then surfaces a recommendation: <em>this patient may need escalation before the next scheduled review</em>. The system shows why the case was flagged, which signals support the recommendation, and what information may be incomplete. The clinician checks the record and makes the call. That is the advisory layer &#8212; the human still decides, but with better information.</p><p>Some things the system could automate &#8212; sending an alert, flagging for senior review, updating the patient status. But in this context, those actions stay deterministic and tightly defined. The system can trigger an alert when vitals cross a hard threshold. It cannot decide on its own that a patient needs escalation based on a probabilistic read. That boundary is not a technical limitation. It is a design choice about where AI is allowed to act. That is the execution layer.</p><p>Running through all of it is transparency. The clinician can see why the system flagged this patient &#8212; which data points, which pattern. It&#8217;s not a black-box recommendation, but a visible chain of reasoning. That is what makes it possible to trust the output enough to act on it, or to override it with confidence.</p><h3><strong>A few rules to keep in mind</strong></h3><p><em><strong>Not every layer should be probabilistic.</strong></em> The closer you are to truth, control, or irreversible consequence, the more deterministic you need to stay.
AI does not belong everywhere just because it technically can go everywhere.</p><p><em><strong>Let the cost of being wrong set the constraints.</strong></em> In some flows a weak recommendation creates minor friction. In others it causes serious escalations, contractual exposure, or damaged trust that takes a long time to rebuild. The downside should determine the decision.</p><blockquote><p><em>A recommendation that looks like a decision is a design failure.</em></p></blockquote><p><em><strong>There is a real difference between helping users see more clearly and quietly reshaping what they see.</strong></em> Summarising, grouping, flagging &#8212; that is interpretation. Suppressing, reordering, deciding what matters &#8212; that is something else. The line between them is easy to miss in a design review and very hard to explain after an incident.</p><p><em><strong>Transparency and user control are not nice-to-haves.</strong></em> They are the product. Without them, even well-designed AI can mislead. With them, even imperfect AI can be genuinely useful.</p><p><em><strong>Define the boundary before launch.</strong></em> What is the AI allowed to influence? How is uncertainty communicated? Where are the override paths? These need explicit answers &#8212; not team assumptions that nobody ever wrote down.</p><h3><strong>Closing Thoughts</strong></h3><p>The maturity test for AI in product is not how many features you can ship.</p><p>It is whether your organisation can answer &#8212; before anything goes wrong &#8212; where AI helps, where it stays advisory, and where it should not decide at all. Who defined that line? Who signed off on it?</p><p>Those are not model questions. They are leadership questions.</p><p>The teams that answer them early are not being slow or overcautious. They are doing the work that makes everything else trustworthy.</p><blockquote><p><em>The quality of an AI product is not determined only by what the model can do.
It is determined by whether the product makes the limits of that intelligence impossible to misunderstand.</em></p></blockquote><p></p><p><em><strong>Enjoyed this Article?</strong></em></p><p><em>Subscribe to Lean Product Growth for regular updates on building and scaling a successful product organization. Insights, strategies, and actionable tips&#8212;delivered straight to your inbox.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.enlighten.services/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.enlighten.services/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The AI Playbook for CPOs: Where to Start, What to Avoid, How to Win]]></title><description><![CDATA[Practical principles and decision patterns to help you evaluate whether an AI initiative is worth pursuing]]></description><link>https://www.enlighten.services/p/the-ai-playbook-for-cpos</link><guid isPermaLink="false">https://www.enlighten.services/p/the-ai-playbook-for-cpos</guid><dc:creator><![CDATA[Marina]]></dc:creator><pubDate>Thu, 22 May 2025 18:44:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Yd-L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!Yd-L!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Yd-L!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Yd-L!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Yd-L!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Yd-L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:131889,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.enlighten.services/i/163818034?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Yd-L!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Yd-L!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Yd-L!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Yd-L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa827783d-d842-44e7-b8d7-f6392f7b7ff8_1920x1080.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Last week I attended a CPO summit.</p><p>There wasn&#8217;t a single session, hallway chat, or panel that didn&#8217;t mention AI. Whether the topic was product operations, UX design, growth, or company vision &#8212; AI came up. Not as a distant trend, but as an urgent reality.</p><p>The transition is real.</p><p>CPOs everywhere are under pressure to &#8220;bring AI into the product&#8221; &#8212; but too often, that leads to rushed experimentation without a clear strategy, or worse, to AI features that don&#8217;t help the user or the business.</p><p>So how do you know where AI actually makes sense?<br>Where should you invest &#8212; and where should you hold back?</p><p>What follows is a set of practical principles and decision patterns to help you evaluate whether an AI initiative is worth pursuing &#8212; and how to make sure it actually delivers value.</p><p>Let&#8217;s dive in.</p><blockquote><p><em>If you&#8217;re just beginning your AI journey, you might find this helpful: <a href="https://www.enlighten.services/p/how-to-bring-ai-into-your-product">How to Bring AI Into Your Product</a> &#8212; a guide to responsible adoption and key strategies for getting started.</em></p></blockquote><h2><strong>AI Can&#8217;t Promise Accuracy</strong></h2><p>AI models (especially language models) are inherently probabilistic. </p><p>This means you can feed them the same input twice and get two different outputs. They don&#8217;t follow hard logic rules, and they make mistakes.</p><p>As a result, AI is not suitable for use cases where determinism and correctness are non-negotiable.
If your product needs to deliver a precise number, a legal decision, or any financial action, AI should not be in charge.</p><p>What it's great for is helping people start faster &#8212; with a draft, a suggestion, or a "good enough" shortcut that saves time. Think, for example, about:</p><ul><li><p>Predicting user churn</p></li><li><p>Flagging anomalies in large datasets</p></li><li><p>Identifying patterns even in high-stakes environments like medical scans</p></li><li><p>Suggesting replies in support or sales</p></li></ul><p>These are scenarios where &#8220;close enough&#8221; adds real value &#8212; and where human review or intervention remains part of the loop.</p><p>&#9989; Use AI when the outcome can be reviewed, edited, or safely ignored<br>&#10060; Avoid AI when the output must be exact, consistent, or legally or financially binding</p><h2><strong>From Click-Driven to Conversational UX</strong></h2><p>One big shift AI introduces is how users interact with your product.</p><p>Traditional UX relies on click-driven behaviour: buttons, forms, filters, dropdowns. This works, but it's rigid &#8212; especially for complex tasks. AI, on the other hand, enables users to express their intent in natural language:</p><p><em>&#8220;Show me my most profitable customers from Europe last quarter.&#8221;</em></p><p>This doesn&#8217;t mean every product should suddenly become a chatbot. In fact, pure chat interfaces can frustrate users. But AI lets you combine structured and conversational input. This results in shorter paths to outcomes for users.</p><p>It&#8217;s time to rethink your interface. 
The key questions to ask:</p><ul><li><p>Where are users doing repetitive or multi-step tasks just to express intent?</p></li><li><p>Can we collapse that into one input field or a flexible assistant panel?</p></li></ul><p>&#9989; Use AI to reduce friction in complex or repetitive flows<br>&#10060; Don&#8217;t over-rely on chat as the only interface &#8212; mix patterns thoughtfully</p><h2><strong>AI Accelerates Early UX Exploration</strong></h2><p>In UX work, a major effort goes into translating early ideas into mockups, flows, or prototypes. This is where AI is starting to make a real difference.</p><p>Today&#8217;s tools like <a href="https://lovable.dev/">Lovable</a> and <a href="https://www.usegalileo.ai/explore">Galileo</a> can:</p><ul><li><p>Generate wireframes and layout suggestions from a simple feature description</p></li><li><p>Propose user flows based on goals or natural-language prompts</p></li></ul><p>This is a major unlock for founders and early-stage teams looking to go from idea to testable landing page in a matter of hours, without needing to hire a designer.</p><p>For designers, these tools act as collaborators &#8212; helping them generate alternatives, explore directions, or simply get first ideas flowing.</p><p>These tools certainly won't replace designers &#8212; but they reduce the time from idea to first draft.</p><p>&#9989; Use AI to explore and prototype faster<br>&#10060; Don&#8217;t bypass design thinking &#8212; use it to accelerate, not replace your team&#8217;s process</p><h2><strong>AI Shines in Analyst Work and Unstructured Inputs</strong></h2><p>PMs, analysts, support agents, and operations teams all spend huge amounts of time making sense of unstructured or noisy data: analyzing user feedback, interview notes, support tickets, requirements documents, etc.</p><p>These are the kinds of tasks AI handles very well:</p><ul><li><p>Summarizing NPS comments</p></li><li><p>Structuring product feedback, e.g. 
grouping by theme</p></li><li><p>Extracting next steps from a discovery transcript</p></li><li><p>Translating meeting notes or requirements documents into draft user stories</p></li></ul><p>In fact, tools like <a href="https://support.atlassian.com/cloud-automation/docs/use-atlassian-intelligence-with-jira-automation/">Atlassian Jira</a> already offer AI features that generate user stories or break them into subtasks automatically. This is a strong signal that this shift is already underway.</p><p>In these scenarios, AI doesn&#8217;t replace the human &#8212; it significantly amplifies their speed. It&#8217;s like a quick, tireless analyst on your team, able to scan volumes of content instantly.</p><p>For CPOs, this is one of the fastest ways to create internal leverage. Instead of tasking a team with &#8220;read through 300 survey responses,&#8221; give them a model-assisted dashboard that organizes and prioritizes the results.</p><p>&#9989; Use AI to make qualitative data more actionable<br>&#10060; Don&#8217;t expect it to replace human judgment</p><h2><strong>Deep Research and Strategic Thinking</strong></h2><p>One of AI&#8217;s most valuable &#8212; and still underused &#8212; strengths is its ability to support deep research and strategic preparation.</p><p>AI can process and synthesize large volumes of fragmented, unstructured information and help you move toward strategic clarity. Think about market trends, competitor websites, internal strategy documents, customer interviews, and support tickets.</p><p>It won&#8217;t create your strategy for you. But it gets you to the thinking part faster. Instead of spending hours gathering inputs and making sense of them, AI handles this automatically, so you can focus on analysis, judgment, and decisions.</p><p>Tools like <a href="https://chatgpt.com/">ChatGPT (Pro)</a> and <a href="https://claude.ai/">Claude</a> now make deep research workflows widely accessible &#8212; even without a dedicated data or research team. 
Such tools can:</p><ul><li><p>Analyze uploaded documents like interviews, strategy decks, and product notes</p></li><li><p>Summarize and compare long-form content across multiple files</p></li><li><p>Scan the web for relevant, publicly available information</p></li><li><p>Integrate with internal knowledge bases or systems</p></li><li><p>Connect all this information to help answer complex, strategic questions</p></li></ul><p>&#9989; Use AI to compress and structure large volumes of input<br>&#10060; Don&#8217;t rely on it to generate strategy &#8212; use it to prepare, not decide</p><h3><strong>AI Reduces Human Time, But Increases Infrastructure Cost</strong></h3><p>AI is an accelerator &#8212; but it doesn&#8217;t come for free. Many AI models, especially those used for open-ended generation (like language models that respond to prompts or create content), can be compute-intensive and drive up infrastructure costs.</p><p>But that shouldn&#8217;t be a blocker.</p><p>More companies are realizing that the return on investment often outweighs the cost. As one product leader put it:</p><blockquote><p><em>&#8220;I can now easily justify a business case for using AI with positive ROI to the CEO.&#8221;</em></p></blockquote><p>Where AI creates the most ROI:</p><ul><li><p>Saving hours of manual work</p></li><li><p>Improving the quality of your decisions</p></li><li><p>Accelerating product adoption and improving competitiveness</p></li></ul><p>&#9989; Measure ROI and invest where you see clear business benefit<br>&#10060; Don&#8217;t scale AI features without visibility into benefits and costs</p><h3><strong>AI Raises the Bar for Security and Compliance</strong></h3><p>In highly regulated environments &#8212; like healthcare, finance, or government &#8212; introducing AI is far from straightforward. 
It&#8217;s not just a product decision; it&#8217;s a compliance and risk decision.</p><p>AI can raise serious concerns around:</p><ul><li><p>Data security and privacy</p></li><li><p>Model hallucination and output accountability</p></li></ul><p>This doesn&#8217;t mean AI should be off the table &#8212; but it does mean that security and privacy must be part of your AI strategy from day one.</p><p>For CPOs, that means partnering early with legal, compliance, and security teams to:</p><ul><li><p>Choose models and vendors that support on-premise or region-specific deployment if needed</p></li><li><p>Avoid passing sensitive or regulated data into prompts unless fully anonymized</p></li><li><p>Be transparent about where and how AI is used, especially in customer-facing workflows</p></li></ul><p>&#9989; Build AI features with security, privacy, and accountability in mind<br>&#10060; Don&#8217;t treat AI like &#8220;just another API&#8221; </p><h3><strong>Wrapping Up</strong></h3><p>AI isn&#8217;t magic &#8212; it&#8217;s just a different set of tools. And like any other tool, it requires judgment, experimentation, and critical thinking.</p><p>If you start testing AI in your product workflows, you might increase your confidence  &#8212; or you might find its limitations more quickly than expected. Both outcomes are useful. </p><p>But ignoring it is not an option.</p><p>Because AI is reshaping how users interact with products, how teams work, and how companies compete. It's changing expectations around speed, personalization, and productivity &#8212; whether you&#8217;re ready or not.</p><p>You don&#8217;t need to chase every trend. 
But you do need to understand the capabilities, limitations, and implications of AI &#8212; and make intentional decisions about where to use it, where to avoid it, and where to explore.</p><div><hr></div><p><em><strong>Enjoyed this Article?</strong></em></p><p><em>Subscribe to Lean Product Growth for regular updates on building and scaling a successful product organization. Insights, strategies, and actionable tips&#8212;delivered straight to your inbox.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.enlighten.services/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.enlighten.services/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[How to Bring AI Into Your Product and Teams]]></title><description><![CDATA[What leaders should know to get started with AI and get results.]]></description><link>https://www.enlighten.services/p/how-to-bring-ai-into-your-product</link><guid isPermaLink="false">https://www.enlighten.services/p/how-to-bring-ai-into-your-product</guid><dc:creator><![CDATA[Marina]]></dc:creator><pubDate>Thu, 03 Apr 2025 09:22:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NR0F!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NR0F!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!NR0F!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!NR0F!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!NR0F!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!NR0F!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NR0F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:98667,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.enlighten.services/i/159901681?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NR0F!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!NR0F!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!NR0F!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!NR0F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd21e697-9d24-4c03-a12f-503f54f09eee_1920x1080.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>AI is no longer a distant trend &#8212; it&#8217;s a strategic priority.</p><p>According to a <a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier">McKinsey report</a> from 2023, AI adoption has the potential to generate up to $4.4 trillion in global economic value annually. </p><p>Adoption is accelerating rapidly &#8212; from <a href="https://www.eweek.com/news/ai-generated-posts-flood-linkedin/#:~:text=There%20was%20a%20huge%20spike,the%20help%20of%20artificial%20intelligence">the explosion of AI-generated content</a> following ChatGPT&#8217;s release, to <a href="https://www.bcg.com/publications/2024/gen-ai-increases-productivity-and-expands-capabilities?utm_source=search&amp;utm_medium=cpc&amp;utm_campaign=digital-transformation&amp;utm_description=paid&amp;utm_topic=ai&amp;utm_geo=global&amp;utm_content=dsa_gen-ai-increases-productivity-and-expands-capabilities&amp;gad_source=1&amp;gclid=CjwKCAjwwLO_BhB2EiwAx2e-3xiQqOBPhcyXzqpN1ombQDuINJSHCfSy6AoGDyPHtXT4ydvIvqjoXxoCb14QAvD_BwE&amp;gclsrc=aw.ds">companies reporting tangible productivity gains</a>, and even the rise of new roles like AI consultants.</p><p>Most leaders already recognize the potential. The challenge isn&#8217;t awareness &#8212; it&#8217;s direction.</p><p>What next? Where do you begin? 
How do you integrate AI safely, responsibly, and in a way that drives measurable business value?</p><p>Many teams today fall into one of two common pitfalls:</p><ol><li><p><strong>&#8220;We&#8217;ll wait until the technology matures.&#8221;</strong><br>The reality? AI is already mature enough to create value.</p></li><li><p><strong>&#8220;Let&#8217;s find a way to add AI to our product.&#8221;</strong><br>Without a well-defined problem to solve, adding AI for the sake of AI often wastes time and damages credibility.</p></li></ol><h2>Key Trends To Watch</h2><p>The AI landscape is evolving fast. The number of new tools emerging each month is already impossible to track. </p><p>But beneath the noise, a few key trends stand out. </p><h4><strong>1. Generative AI </strong></h4><p>Generative AI tools (like ChatGPT, Claude, and Gemini) are helping teams generate content &#8212; product management documents, UX designs, software code, test cases, etc. What&#8217;s more, these large language models can be connected to your internal knowledge bases, enabling them to generate domain-specific, accurate outputs based on your proprietary data.</p><h4><strong>2. AI Agents</strong></h4><p>Instead of just responding with information, AI agents can now perform tasks on your behalf &#8212; often across multiple steps.</p><p>For example, an AI agent can read an email, check someone&#8217;s calendar, schedule a meeting, and send a confirmation &#8212; all automatically.</p><p>They can also automate internal workflows like collecting customer feedback, summarizing support tickets, or triggering actions in your tools.</p><p>Think of them as a digital team member who is getting smarter and more capable every day.</p><h4><strong>3. 
Low-Code And No-Code AI Platforms</strong></h4><p>Platforms like <a href="https://relevanceai.com/">Relevance AI</a>, <a href="https://bubble.io/">Bubble</a>, and others are enabling non-technical teams to quickly prototype and deploy AI-powered solutions &#8212; without relying heavily on engineering.</p><p>This is especially valuable for startups or teams testing a new idea, where speed matters and resources are limited. These tools allow teams to test ideas, validate use cases, and iterate fast &#8212; <em>before</em> investing in fully engineered implementations.</p><p>By significantly lowering the barrier to entry, they empower product, operations, and business teams to explore AI opportunities hands-on, without waiting on dedicated dev time.</p><h3>How Can AI Help Your Organization</h3><p>So what does that mean for your organization? Two major opportunities:</p><h4><strong>1. Enhance the Flows You Already Have</strong></h4><p>AI can be a huge productivity lever. Even without building anything new, it can boost the productivity of the workflows you already have.</p><p>Think about use cases like: designing UX prototypes, drafting product requirements, conducting in-depth research, writing and reviewing code, automating QA, analyzing user feedback and product usage, triaging internal tickets, summarizing meetings, accelerating marketing workflows, etc.</p><p>The list keeps growing &#8212; and the impact is clear: your teams get more done, in less time, with higher consistency.</p><h4><strong>2. 
Improve Your Product Experience</strong></h4><p>AI can also enhance your product&#8217;s core functionality &#8212; making it smarter, faster, and more intuitive for your users.</p><p>Consider features like:</p><ul><li><p>An onboarding assistant that adapts in real time to user behavior</p></li><li><p>Automated insights generated from reports or dashboards</p></li><li><p>Conversational interfaces that go beyond traditional chatbots</p></li></ul><p>These capabilities can directly improve your customer experience &#8212; and help your product stand out in an increasingly competitive market.</p><p>But here&#8217;s the reality:</p><p><strong>The bar for differentiation is rising.</strong></p><p>The technical barriers to building AI-powered features are now lower than ever. Which means if you're not evolving your product, there&#8217;s a good chance your competitors are &#8212; or soon will be.</p><h2>How to Start Implementing AI Across Your Organization</h2><p>AI is starting to reshape how companies operate, build, and compete. </p><p>For many organizations, the challenge isn't whether to engage with AI, but how to do it responsibly, strategically, and without chaos.</p><p>Here&#8217;s a pragmatic, phased approach to help you get started.</p><h3>1. Build Internal Awareness and Literacy Early</h3><p>Start by building a shared, realistic understanding of what AI is &#8212; and what it isn&#8217;t. While many team members are likely following the latest trends, some may lack context on how AI can be applied in their work or where its current limitations lie.</p><p>Begin these conversations early &#8212; even before leadership has set formal policies.</p><p>Why?</p><ul><li><p><strong>AI is already in use.</strong> Many teams are experimenting with tools like ChatGPT. 
Without guidance, this can lead to untracked, non-compliant usage.</p></li><li><p><strong>Bottom-up insights matter.</strong> Early engagement surfaces real use cases and concerns, helping shape more practical policies.</p></li><li><p><strong>It builds trust and buy-in.</strong> Starting early signals that leadership is enabling innovation, not just managing risk.</p></li></ul><h3>2. Set Guardrails for Safe Experimentation</h3><p>Experimentation needs to be fast &#8212; but also safe. </p><p>To encourage AI exploration without creating unnecessary risk, you need some smart boundaries.</p><p>One practice is to separate lightweight experimentation from full-scale adoption. </p><p>For early testing, allow teams to explore tools under clear conditions, e.g. no sensitive data, no production integrations, and only using free or personal accounts. Ask teams to document what they&#8217;re testing and what they hope to learn. This enables quick experimentation without waiting for lengthy approvals.</p><p>If a tool shows real potential, it can then move into a more formal review process &#8212; including security, privacy, and legal assessments, as well as evaluation of vendor reliability and integration needs.</p><h3>3. Identify a Cross-Functional AI Team</h3><p>AI cuts across product, design, data, legal, and operations. </p><p>That&#8217;s why your early AI efforts are best led by a cross-functional team with the mandate to explore opportunities from multiple angles.</p><p>Their role is to identify where AI could create real value, run lightweight and low-risk experiments, and validate feasibility before making larger investments. Think of this group as a small, agile strike team &#8212; focused on learning, not perfection.</p><p>With the right guardrails in place, they can move quickly, helping the organization build momentum while staying aligned with broader strategic goals.</p><h3>4. 
Encourage Teams to Explore Use Cases &#8212; With Strategic Focus</h3><p>Once there&#8217;s a shared understanding of AI and basic guardrails are in place, you can encourage teams across the business to start identifying opportunities in their own domains.</p><p>One simple way to guide this exploration is by asking two key questions:</p><ul><li><p><em>How can AI help us increase productivity in our daily work?</em></p></li><li><p><em>How might AI improve our product or user experience?</em></p></li></ul><p>Encourage teams to evaluate their ideas through a strategic lens &#8212; considering feasibility, potential business value, risk, and how well each idea aligns with the company&#8217;s overall goals.</p><p>The most promising initiatives should then flow into existing planning and prioritization cycles. This ensures that AI experimentation evolves beyond isolated pilots and becomes a focused driver of meaningful business outcomes.</p><h3>5. Start Small &#8212; and Start Internally</h3><p>Your first AI success doesn&#8217;t need to be customer-facing. </p><p>In fact, internal use cases are often the most practical and effective starting point. They carry less risk, are easier to control, and can demonstrate clear value quickly.</p><p>Focus on areas where tasks are repetitive, time-consuming, or a known source of frustration. </p><p>Prioritize use cases that are easy to measure and quick to test &#8212; such as automating internal reporting, summarizing meetings, drafting responses to support tickets, or improving access to internal knowledge.</p><p>These behind-the-scenes wins can build momentum, reduce resistance, and create the confidence needed to take on more ambitious AI initiatives.</p><h3>6. Put Coordination Structures in Place</h3><p>As AI experimentation gains traction, it&#8217;s important to evolve from isolated testing to coordinated implementation. 
Without structure, you risk tool sprawl, duplicated effort, and compliance issues.</p><p>Consider introducing lightweight coordination mechanisms that maintain flexibility while ensuring alignment. For example, a central inventory to track the tools, use cases, and pilots happening across the organization. Establishing and maintaining clear usage guidelines that define how AI should be applied is also essential.</p><p>With the right structures in place, you can keep innovation moving &#8212; without compromising visibility, control, or alignment.</p><h3>7. Measure What Matters: Business Impact</h3><p>As pilots begin to deliver results, it&#8217;s critical to focus on what truly counts: measurable business outcomes.</p><p>Measure tangible outcomes: time saved by internal teams, cost reductions, process efficiencies, improved user experiences, or faster, better-informed decision-making.</p><p>These are the results that resonate with executive stakeholders, validate the value of your AI initiatives, and build the momentum needed to expand and scale across the organization.</p><h2>Your Next Step</h2><p>AI is an opportunity you can act on today. </p><p>The key is to move thoughtfully, but not slowly. If you combine curiosity with structure, you will be best positioned to leverage AI&#8217;s potential &#8212; and stay ahead in the next wave of innovation.</p><div><hr></div><p><em>What&#8217;s your organization&#8217;s first step? If you&#8217;re exploring AI &#8212; or unsure where to start &#8212; I&#8217;d love to hear how you&#8217;re approaching it.</em></p><p></p><p><em><strong>Enjoyed this Article? </strong></em></p><p><em>Subscribe to Lean Product Growth for regular updates on building and scaling a successful product organization. 
Insights, strategies, and actionable tips&#8212;delivered straight to your inbox.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.enlighten.services/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.enlighten.services/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item></channel></rss>