<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Fixme]]></title><description><![CDATA[A space for debugging engineering and product work in tech]]></description><link>https://www.fixme.media</link><image><url>https://substackcdn.com/image/fetch/$s_!heFl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1c5552f-bf5c-4dde-80f0-6f4a8caa68c8_1200x1200.png</url><title>Fixme</title><link>https://www.fixme.media</link></image><generator>Substack</generator><lastBuildDate>Sat, 09 May 2026 10:59:50 GMT</lastBuildDate><atom:link href="https://www.fixme.media/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Adam Berry]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[fixme@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[fixme@substack.com]]></itunes:email><itunes:name><![CDATA[Adam Berry]]></itunes:name></itunes:owner><itunes:author><![CDATA[Adam Berry]]></itunes:author><googleplay:owner><![CDATA[fixme@substack.com]]></googleplay:owner><googleplay:email><![CDATA[fixme@substack.com]]></googleplay:email><googleplay:author><![CDATA[Adam Berry]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Psychological Safety Just Got More Important]]></title><description><![CDATA[We ship the org. Let's make sure we're shipping one that can handle what's coming]]></description><link>https://www.fixme.media/p/psychological-safety-just-got-more</link><guid isPermaLink="false">https://www.fixme.media/p/psychological-safety-just-got-more</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Thu, 07 May 2026 16:30:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!heFl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1c5552f-bf5c-4dde-80f0-6f4a8caa68c8_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been having a lot of conversations lately about transparency. About feedback. How to give it, how to receive it, how to make it useful instead of performative. And asking for the unvarnished version of how things are going as I step into a larger portfolio.</p><p>Every time I ask for feedback, I catch myself waiting. Waiting until I have a clearer picture. Until the thinking is more developed. Until the idea is polished enough to be worth sharing.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Fixme! 
Subscribe for free to receive new posts and support my work.</p></div></div></div><p>A recurring idea has emerged from these conversations: we&#8217;re running strategy at the speed of bug fixes now. The things that used to take months (shaping a direction, socialising it, building alignment) are now happening in days. Sometimes hours. The AI transition has compressed the planning horizon in ways we haven&#8217;t fully reckoned with. And waiting until thinking is polished before sharing it doesn&#8217;t just slow you down, it makes the thinking obsolete by the time it circulates.</p><p>So I&#8217;ve been trying something different. Sharing the half-formed things. Asking for feedback before I&#8217;m confident in the answers. Running small experiments to see what breaks before committing to a direction.</p><p>It&#8217;s scary. Not terrifying, more like the uncomfortable but clarifying kind of scary. And the more I&#8217;ve sat with it, the more I&#8217;ve come to believe that the transparency piece isn&#8217;t just a personal preference or a cultural aspiration. It&#8217;s a structural necessity for the AI transition. With AI as a multiplier, it really feels like the safety underneath it is a multiplier too.</p><h2><strong>The Foundation</strong></h2><p>Back in 2012, Google began a study that became known as <a href="https://rework.withgoogle.com/blog/five-keys-to-a-successful-google-team/">Project Aristotle</a>. They set out to understand what made teams effective. They studied group norms, team composition, individual personalities, everything they could measure. The finding that emerged, again and again, was <em>psychological safety</em>: the shared belief that the team is safe for interpersonal risk-taking. Speaking up with a half-formed idea. Admitting you don&#8217;t know. Challenging a decision without fear of being punished for it.</p><p>Safety wasn&#8217;t just one factor among many. It was the multiplier. Teams with high psychological safety consistently outperformed teams with lower safety, even when controlling for everything else.</p><p>AI is a multiplier too. We&#8217;ve talked a lot about AI as a force multiplier for individual productivity: 2x, 5x, whatever the numbers are this quarter. But the multiplier framing cuts both ways.</p><p>When psychological safety is low and you introduce AI, you don&#8217;t get &#8220;low safety + AI = normal outcomes.&#8221; You get compounding failure modes. AI amplifies the dynamics that are already there. In an environment where people don&#8217;t speak up, where mistakes are punished, where challenging ideas feels risky, AI makes that environment faster and worse. The surface area for errors increases. The velocity of bad decisions accelerates. The institutional memory of what went wrong gets overwritten before anyone can learn from it.</p><p>When psychological safety is high and you introduce AI, you get the inverse. People share the half-formed things. Experiments generate evidence. Feedback loops tighten. AI accelerates the good dynamics instead of the bad ones.</p><p>This is the metaphor worth sitting with: <strong>AI &#215; Psychological Safety</strong>. Or something like that. The math isn&#8217;t the point. 
The point is that the base matters more now, because whatever&#8217;s there gets amplified. Invest in safety, and AI works for you. Don&#8217;t, and it works against you.</p><h2><strong>The Information Architecture Problem</strong></h2><p>If thinking in the open is the answer, what&#8217;s the system for it?</p><p>Here&#8217;s where it gets practical. Open thinking without structure is noise. You get half-baked ideas scattered across Slack threads that nobody can find later. You get the same conversations happening in seventeen different places. You get the information equivalent of a codebase with no architecture, technically possible to navigate, but expensive and error-prone.</p><p>The challenge is <em>audience sizing</em>: what&#8217;s the right audience for thinking at different stages?</p><p>Early-stage thinking, the genuinely half-formed stuff, needs a small, trusted circle. Not because it&#8217;s secret, but because the signal-to-noise ratio is low. You&#8217;re exploring, not presenting. The feedback you need at this stage is &#8220;does this direction have legs?&#8221; not &#8220;here&#8217;s a detailed critique of your assumptions.&#8221;</p><p>Mid-stage thinking, where you&#8217;ve run a small experiment and have evidence, opens up. You can bring in a broader set of peers. The signal is stronger because you&#8217;ve generated some. This is where the thinking starts to become useful to others, not just yourself.</p><p>Late-stage thinking, validated directions, things you&#8217;re ready to act on, belongs in the open, broadly. This is where org-wide transparency pays off. When people can see the evidence behind decisions, they can contribute context, flag risks, and align their own work.</p><p>The failure mode at most organizations is skipping stages. Putting early-stage thinking in broad forums, where the lack of evidence makes it look undercooked. Or keeping mid-stage thinking in small circles when the experiment&#8217;s results could inform other teams. The result is either noise or missed opportunity.</p><p>The other failure mode is feedback loops that are too slow. Thinking in the open only works if the people you&#8217;re sharing with can engage fast enough to matter. If it takes three weeks to get a response, the landscape has shifted. This is where the speed problem I mentioned earlier bites hardest. The AI transition does not just compress the time to produce ideas, it compresses the time to validate them. Organizations that can give fast, high-quality feedback on open thinking will outlearn those that can&#8217;t.</p><h2><strong>Conway&#8217;s Law Has Something to Say</strong></h2><p>We talk about Conway&#8217;s Law mostly in terms of architecture: you ship the org structure into your systems. Teams that don&#8217;t communicate produce systems that don&#8217;t integrate. The teams that do communicate produce coherent architectures.</p><p>But Conway&#8217;s Law applies to information flow too. The communication structures of an organization shape what information can flow through it. Team boundaries, ownership models, reporting lines, these are all filters. They determine what gets shared, with whom, at what latency, with what fidelity.</p><p>If you want thinking to happen in the open, the org architecture has to support it. Not just tolerate it, actively enable it. 
That means:</p><ul><li><p><strong>Boundaries that allow cross-pollination</strong>: Teams that only communicate through official channels have information that moves at the speed of process, not conversation.</p></li><li><p><strong>Ownership that doesn&#8217;t mean isolation</strong>: &#8220;Owning&#8221; a service or domain shouldn&#8217;t mean &#8220;you&#8217;re the only one who can have opinions about it.&#8221;</p></li><li><p><strong>Leadership that models it</strong>: If the senior folks polish before presenting, they&#8217;re signaling that early-stage thinking isn&#8217;t welcome. The culture follows the modeling.</p></li></ul><p>This is the architectural work that can&#8217;t be skipped. You can talk about psychological safety all day. You can encourage thinking in the open. But if the structure around it constrains the flow, the culture won&#8217;t overcome it. You have to design for it.</p><h2><strong>The Multiplier Math</strong></h2><p>Let&#8217;s pull the threads together.</p><p><strong>Outcomes = Psychological Safety &#215; Information Architecture &#215; AI Multiplier</strong></p><p>When the base factors are strong, the multiplier works for you. AI amplifies good judgment, tight feedback loops, and evidence-based iteration. When the base factors are weak, AI amplifies silence, slow feedback, and expensive mistakes.</p><p>The implication isn&#8217;t just &#8220;build a safe culture.&#8221; It&#8217;s &#8220;the investment in safety is now more valuable than it was before, because the multiplier is higher.&#8221; This isn&#8217;t soft stuff to optimize when you have time. This is infrastructure for the AI transition.</p><p>The same is true for information architecture. The cost of bad information flow (fragmented knowledge, slow feedback, duplicated conversations) is amplified when AI is in the loop. AI tools work better when they&#8217;re working with coherent, accessible information. They work worse with tribal knowledge, scattered context, and undocumented decisions.</p><h2><strong>The Ask</strong></h2><p>This is a moment for leaders to build the base factors, not just adopt the multiplier.</p><p>Safety without structure is vibes. Structure without safety is bureaucracy. You need both.</p><p>And you need both now. The organizations that figure out how to do thinking in the open (transparent, experimental, fast-feedback) will outlearn those that don&#8217;t. AI makes that possible in a way it wasn&#8217;t before. The cost of producing and sharing ideas has dropped. The question is whether the org architecture can absorb them at the same rate.</p><p>We have a chance to build organizations where AI compounds good work. Where the multiplier works for us instead of against us. Where the speed of strategy matches the speed of bug fixes, and we still get it right.</p><p>The scary part (thinking out loud, sharing the half-formed things, asking for feedback before you&#8217;ve earned the right to have answers) is also the way in.</p><p>We can do hard things. Let&#8217;s not skip the hard thing because it&#8217;s hard.</p><div><hr></div><p>What does thinking in the open look like in your organization? 
What are the structures that enable it, or constrain it?</p><div><hr></div><p><strong>Related reading:</strong></p><ul><li><p><a href="https://rework.withgoogle.com/blog/five-keys-to-a-successful-google-team/">Project Aristotle and psychological safety</a> - Google&#8217;s research on what makes teams effective</p></li><li><p><em>An Elegant Puzzle</em> by Will Larson - on org design for enablement</p></li><li><p><a href="https://en.wikipedia.org/wiki/Conway%27s_law">Conway&#8217;s Law</a> - on the relationship between org structure and system design</p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Four Vs of Code]]></title><description><![CDATA[Code isn't solved, but it will be fundamentally different from now on]]></description><link>https://www.fixme.media/p/the-four-vs-of-code</link><guid isPermaLink="false">https://www.fixme.media/p/the-four-vs-of-code</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Wed, 18 Mar 2026 04:37:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!heFl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1c5552f-bf5c-4dde-80f0-6f4a8caa68c8_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Unwinnable Race</strong></h2><p>I recently gave a talk framing the AI transition as the industrialization of software engineering. The parallel held&#8212;the audience nodded along as I connected AI-assisted engineering to the cloud transition, an earlier shift in the economics of production.</p><p>But something nagged at me afterward. The cloud transition changed <em>where</em> and <em>how</em> we ran software, but it didn&#8217;t change the fundamental nature of the thing being produced. It was a change in the plumbing, not the water.</p><p>Then, last week, I found myself in yet another conversation with a platform lead about &#8220;optimizing code review.&#8221; They were looking for the right combination of assignment logic, reminder bots, and &#8220;reviewer fatigue&#8221; metrics to make their process &#8220;work really well.&#8221;</p><p>As we talked, I realized they were trying to win an unwinnable race. They were trying to apply human-rate friction to machine-rate volume.</p><p>That&#8217;s when it clicked. The cloud wasn&#8217;t the right parallel. The <em>data wave</em> was.</p><p>Cloud, mobile, and big data arrived together and broke our existing data infrastructure. The &#8220;Four Vs of Data&#8221; (Volume, Velocity, Variety, Veracity) gave us a language for why. We aren&#8217;t in the first industrialization of software engineering. We&#8217;re in the second. 
The data wave was the first.</p><p>And just like before, the scale of what we produce is shifting along several axes, and that shift is what breaks the infrastructure.</p><h2><strong>The &#8220;Solved&#8221; Fallacy</strong></h2><p>You&#8217;ve probably seen the headlines: &#8220;Coding is solved.&#8221; Boris Cherny, head of Claude Code, <a href="https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens">recently said</a> exactly that. The token sellers love this framing because it sells, well, tokens. It&#8217;s the Hadoop moment all over again.</p><p>In 2012, people claimed data was &#8220;solved&#8221; because we had Hadoop. But Hadoop only solved the obvious first problem: storing and processing at scale. It didn&#8217;t solve data quality, lineage, or governance. In fact, Hadoop behaved more like a problem scaler than a problem creator. Those issues existed before, but Hadoop scaled them out to a point where they could no longer be ignored.</p><p>AI producing code is the Hadoop equivalent. It&#8217;s the enabling condition, not the solution. The forces that AI-rate code production creates are the <em>actual</em> problem. And if we want to survive this wave without drowning in a swamp of &#8220;good enough&#8221; boilerplate, we need to understand the Four Vs of Code.</p><h2><strong>The Four Forces</strong></h2><p>The same forces that broke data infrastructure are now breaking code infrastructure.</p><p><strong>1. Volume: The Infinite Monkeys.</strong> We&#8217;re moving from a world where code was a scarce resource written by expensive humans to one where it&#8217;s a commodity. Engineers using AI effectively are reporting 2-5x productivity uplifts. That&#8217;s not 2-5x more typing; it&#8217;s 2-5x more <em>functioning software</em> moving through our systems. More branches, more PRs, more surface area to understand and maintain.</p><p><strong>2. Velocity: Continuous Generation.</strong> The gap between idea and running code is shrinking. We&#8217;re moving past Continuous Integration into Continuous Generation. Cycle times are compressing, and the rhythm of engineering work is accelerating past the point where a human can &#8220;keep up.&#8221;</p><p><strong>3. Variety: Polyglot Sprawl and Cheap Retries.</strong> When you can produce code quickly in unfamiliar territory, you stop routing around your skill gaps. This leads to &#8220;variety&#8221;&#8212;engineers working across more languages and frameworks than they&#8217;d ever attempt solo. It also means we can generate multiple implementations or &#8220;tries&#8221; at a problem at nearly negligible marginal cost, leading to a proliferation of approaches to the same issue. It sounds like a superpower, but it stresses ownership models. Who owns the Python script inside the Go service that was generated by someone who doesn&#8217;t actually know Python? Or the three alternative implementations an agent generated while trying to find the &#8220;best&#8221; one?</p><p><strong>4. Veracity: The Hallucination of Correctness.</strong> Generated code has a trust problem. AI produces plausible output with variable correctness. It <em>looks</em> right, it&#8217;s well-structured, and it might even pass initial tests. But it can be subtly broken or insecure in ways that are hard for a human to spot at scale.</p><h2><strong>The Squeeze: Volume &#215; Velocity</strong></h2><p>The Vs individually are manageable. It&#8217;s the interactions where the wheels fall off.</p><p>Take <strong>Volume &#215; Velocity</strong>. 
This is the &#8220;unwinnable race&#8221; I was discussing with that platform lead. Every organization running AI-assisted engineering at scale has the same complaint: the review queue.</p><p>More code is arriving faster than humans can read it. Our current response is to ask humans to read faster or to &#8220;be more diligent.&#8221; The arguments that this approach is <a href="https://www.latent.space/p/reviews-dead">already breaking</a> are getting louder. This is the &#8220;bigger-Postgres-instance&#8221; response. It works for a while, but it doesn&#8217;t scale.</p><p>When an agent produces a complete feature implementation in minutes, requiring a human to read every line before merge isn&#8217;t rigor; it&#8217;s a scaling constraint wearing the costume of rigor. We don&#8217;t read every line of compiler output or generated protobuf code because we have verification infrastructure that makes line-by-line review unnecessary.</p><p>Agent-produced code doesn&#8217;t have that infrastructure yet.</p><h2><strong>The &#8220;Data Lake&#8221; Moment for Code</strong></h2><p>This is the &#8220;Data Lake&#8221; moment for our craft.</p><p>The data lake didn&#8217;t emerge because someone thought it would be fun to store everything in one place. It emerged because volume and variety made &#8220;schema-first ingestion&#8221; (the data equivalent of code review) untenable.</p><p>The infrastructure inverted: <strong>store first, schema on read.</strong></p><p>Code is heading for a similar inversion: <strong>merge first, verify continuously.</strong></p><p>Our platforms&#8212;Git, GitHub, our CI pipelines&#8212;were built on the assumptions of the data warehouse: human-rate production, sequential process, and specialist operators. Those assumptions are currently under a level of pressure they weren&#8217;t designed to handle.</p><h2><strong>The Infrastructure Demand</strong></h2><p>We&#8217;re in the &#8220;patching&#8221; phase right now. We&#8217;re adding better reminder bots and longer review times. But the real shift is going to require new categories of infrastructure:</p><ul><li><p><strong>Verification Infrastructure:</strong> Continuous, automated, and probabilistic. We need systems that can verify the veracity of code without requiring a human to &#8220;look at the diff.&#8221; A sketch of what this might look like follows this list.</p></li><li><p><strong>Context Infrastructure:</strong> Making our platforms legible to AI. If your platform confuses humans, it <em>really</em> confuses AI. We need machine-readable context (API docs, schema registries) that turns inference into fact-based operation.</p></li><li><p><strong>Absorption Infrastructure:</strong> Systems that can handle higher code volume without proportional human review cost. Smarter merge systems and automated quality gates are no longer &#8220;nice to haves.&#8221;</p></li></ul>
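<p>To make the verification piece concrete, here is a minimal sketch, in Python, of a &#8220;merge first, verify continuously&#8221; loop. Everything in it is hypothetical: the check functions are stand-ins for a real test runner, a static analyzer, and a canary-metrics comparison. Treat it as a shape, not an implementation.</p><pre><code class="language-python"># Sketch: a post-merge verification loop ("merge first, verify continuously").
# The check functions are hypothetical stand-ins for real systems.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    check: str
    passed: bool
    detail: str

def run_tests(sha: str) -> Verdict:
    # Stand-in: run the full suite against the merged commit.
    return Verdict("tests", True, "1204 passed")

def static_scan(sha: str) -> Verdict:
    # Stand-in: lint and security scan of the merged diff.
    return Verdict("static-scan", True, "no new findings")

def canary_metrics(sha: str) -> Verdict:
    # Stand-in: compare a canary slice against the baseline.
    return Verdict("canary", False, "p99 latency up 18 percent vs baseline")

CHECKS: List[Callable[[str], Verdict]] = [run_tests, static_scan, canary_metrics]

def verify(sha: str) -> None:
    """Escalate to a human only on failure, instead of asking a human
    to read every line before merge."""
    failures = [v for v in (check(sha) for check in CHECKS) if not v.passed]
    if not failures:
        print(f"{sha}: verified, no human read required")
        return
    for v in failures:
        print(f"{sha}: {v.check} failed ({v.detail}), roll back and page the steward")

verify("a1b2c3d")
</code></pre><p>The inversion is in where the human sits: after the merge, triaging the failures the infrastructure surfaces, rather than in front of every diff.</p>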
<h2><strong>Stop Typing, Start Stewarding</strong></h2><p>The ground is shifting. It&#8217;s uncomfortable, it&#8217;s noisy, and it feels like we&#8217;re losing our bearings. But we&#8217;ve navigated a wave like this before.</p><p>We need to stop treating code as a precious artifact and start treating it like the industrial output it&#8217;s becoming. This means shifting our focus from writing to reading, from owning to stewarding, and from gatekeeping to verifying.</p><p>If the monkeys are going to be this fast, we&#8217;d better be great stewards.</p><p>The question isn&#8217;t whether the infrastructure will be re-engineered. The question is whether we&#8217;ll build ahead of the curve or get dragged past it.</p><p>For platform and infrastructure engineering, this is going to be a very interesting few years.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Fixme! Subscribe for free to receive new posts and support my work.</p></div></div></div>]]></content:encoded></item><item><title><![CDATA[AI Longitude]]></title><description><![CDATA[We can get our bearings back]]></description><link>https://www.fixme.media/p/ai-longitude</link><guid isPermaLink="false">https://www.fixme.media/p/ai-longitude</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Fri, 22 Aug 2025 15:01:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!heFl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1c5552f-bf5c-4dde-80f0-6f4a8caa68c8_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We've lost our bearings. As software engineers and an industry, we're adrift in the AI disruption. But we can regain our sense of direction by looking to the lessons of our own professional history and the wider history of human technology.</p><p>I've been thinking a lot about the story of how we solved the problem of <strong>longitude</strong> at sea. It was a massive navigational challenge, layered on top of the existing practice of sailing, that was ultimately solved by a seemingly small piece of technology: an accurate clock. That story reminds us that truly transformational shifts in how humans operate, like being able to sail across the oceans, can come down to leaps in innovation. </p><h3>The Problem with Extremes and Our Anxiety</h3><p>A lot of the conversation around AI engineering tools sits at the extremes, but the most likely outcome is, as always, somewhere in the middle. By flapping around about best- and worst-case scenarios, we're missing a lot of what's important.</p><p>I've been in conversations where the same people express accidentally contradictory concerns. On one hand, they argue that AI assistants don't really speed us up because coding was never the bottleneck. They claim the tools aren't good enough anyway, so it doesn't really matter. Then, in the same breath, they worry about how new graduates will learn to code if AI is so good that they never have to do it themselves. </p><p>These viewpoints show that we're a bit at sea. They highlight the normal anxieties that come with major change. There's the fear of job displacement for experienced engineers&#8212;"the tool isn't good enough" is a dismissive stance that says, "It won't affect me." And there's the fear of the unknown for the next generation&#8212;what happens to the junior talent pipeline if the fundamental skills change? 
Both are valid human reactions, but they can't both be true about the technology.</p><div><hr></div><h3>A Moral Panic About Thinking</h3><p>It's also useful to remember that humans have a history of moral panic whenever a new technology affects how we think. Think about the introduction of <strong>books</strong>. Many people worried that, because authors and orators no longer had to construct arguments or carry long narratives in their heads, books would negatively impact our ability to reason and think. Or so the thinking went.</p><p>Do these tools change the way our brains work? Yes. But that was also true of books, and of the internet, and of search engines. Is it for the worse? I don't know, maybe. But in the long run, these shifts have raised the ceiling of what we can achieve.</p><p>This happens because the individuals and organizations that <strong>learn effectively</strong> are the ones that win. We've been too focused on execution and efficiency as the sole drivers of success lately. But organizations that figure out how to help their people learn, pick up new skills, and rewire how they work will be able to get products to market more quickly. Fundamentally, that's what drives success. We've known this for a while&#8212;it's a core tenet of research like <em>The Fifth Discipline</em>.</p><div><hr></div><h3>Learning from Our Past, Navigating Our Future</h3><p>We can also look to our own profession for examples. Think about the introduction of <strong>compiled languages</strong> over directly writing assembly code. The criticism then was nearly identical to what we hear today: "The compiler will never be able to write this as well as I can. It's never going to be as efficient as I can make it. It's never going to be as correct as I can make it."</p><p>It's safe to say that history has declared a clear winner. We now have a towering stack of languages that sit on top of assembly. AI tools that write code are just another natural evolution of this trend toward higher levels of abstraction.</p><p>So, how do we navigate this new era? How do we, as experienced practitioners, bring the next wave of engineers along with us? How do we help our organizations adapt?</p><p>The <strong>principles</strong> are the same. We still need to pair, mentor, and give people projects that help them grow. But the <strong>mechanics</strong> are different. New engineers will be swinging AI tools more natively, more instinctively, and earlier in the process. We will have to learn along with them, and that's the uncomfortable part. But we've been through this before.</p><p>This disruption is large and it's certainly unmooring. But because we've been through this before, we do know how to navigate it. 
</p><p>It's time to go build our clock.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/p/ai-longitude?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.fixme.media/p/ai-longitude?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.fixme.media/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI Induced Incidents]]></title><description><![CDATA[Or why misfire hyperbole distracts, and what we should focus on]]></description><link>https://www.fixme.media/p/ai-induced-incidents</link><guid isPermaLink="false">https://www.fixme.media/p/ai-induced-incidents</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Fri, 08 Aug 2025 14:57:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!heFl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1c5552f-bf5c-4dde-80f0-6f4a8caa68c8_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TsLQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TsLQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif 424w, https://substackcdn.com/image/fetch/$s_!TsLQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif 848w, https://substackcdn.com/image/fetch/$s_!TsLQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif 1272w, https://substackcdn.com/image/fetch/$s_!TsLQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TsLQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif" width="400" height="220" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TsLQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif 424w, https://substackcdn.com/image/fetch/$s_!TsLQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif 848w, https://substackcdn.com/image/fetch/$s_!TsLQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif 1272w, https://substackcdn.com/image/fetch/$s_!TsLQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbe01b8f-5510-4578-8bf8-6f6472530f59_400x220.gif 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>Surely at this point everyone has heard the story where <a href="https://www.theregister.com/2025/07/21/replit_saastr_vibe_coding_incident/">Replit deleted a production database</a>, overriding its rules and instructions along the way, and another instance, with the <a href="https://mashable.com/article/google-gemini-deletes-users-code?test_uuid=003aGE6xTMbhuvdzpnH5X4Q&amp;test_variant=b">Gemini CLI</a> deleting a load of code, followed right behind. These are just examples of coverage arguing that because AI makes mistakes, it&#8217;s dangerous and shouldn&#8217;t be used. We should definitely pay close attention to failures, but then dig in and lean on our engineering chops, because the AI tooling is really incidental in these examples.</p><p>To a decent degree, we are just falling for our human vulnerability to being distracted by significant negative events; we&#8217;re working against evolution on this, and so I get the reaction. What I&#8217;m really looking for is for our rational sides to take over and work the problem.</p><div class="pullquote"><p>If your delegate goes off the rails and does a bunch of destructive things, you have delegated poorly, whether you understood that or not</p></div><p>There are two things that stand out to me as lessons to take forward. Solid engineering practices remain incredibly important, and what is really going to matter is the rate of failures being introduced into our systems. </p><p>Guardrails and incident management practices are an obvious set of things key to reducing and managing the risk and impact of bad things happening. Note I said reduce, not eliminate. 
This has always been core to this school of thought: zero risk, if even achievable, is likely cost-prohibitive, and so we approach this by managing the risk and deciding what guardrails we need in place to do that.</p><p>Human engineers have been happily killing production systems for as long as we&#8217;ve had them (so, so many examples come to mind, both personal and legend); it would have been a really weird response if we had just stopped writing code as the only way to prevent those issues. Over time we built up incident response skills, and then incident management processes that fed into making guardrail improvements. This is how we reduce impact and risk.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/p/ai-induced-incidents?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.fixme.media/p/ai-induced-incidents?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>The part that came later was basically the codifying of the four DORA metrics as a core view for tracking the throughput and quality of the changes we make. There are certainly other metric sets you can build as your feedback loop, but you definitely should have <em>something</em>.</p><p>What you really must know is whether you are shifting either the rate of failures or your recovery time from them. This is true whether or not you are in the midst of an AI push. Some early work has shown that increases in throughput and change size (from using AI tools) have negatively impacted change failure rates. That&#8217;s actually just a predictable outcome from the delivery model here, but it might be ok. What you really want to think about is whether the shift in your risk profile is ok for the increased value you are getting from shipping more; that&#8217;s a tradeoff decision for you and your team.</p>
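<p>As a sketch of the kind of feedback loop I mean, here&#8217;s a minimal Python example of computing two of those four metrics, change failure rate and time to restore, from a deployment log. The records are made up; the real inputs would come from your own deploy and incident tooling.</p><pre><code class="language-python">from datetime import timedelta

# Made-up deployment records; real ones would come from deploy/incident tooling.
deploys = [
    {"id": "d1", "failed": False, "restore": None},
    {"id": "d2", "failed": True, "restore": timedelta(minutes=38)},
    {"id": "d3", "failed": False, "restore": None},
    {"id": "d4", "failed": True, "restore": timedelta(minutes=12)},
]

failures = [d for d in deploys if d["failed"]]

# Change failure rate: what fraction of changes caused a failure in production?
change_failure_rate = len(failures) / len(deploys)

# Time to restore: how quickly did we recover when a change did fail?
mean_time_to_restore = sum((d["restore"] for d in failures), timedelta()) / len(failures)

print(f"Change failure rate: {change_failure_rate:.0%}")   # 50%
print(f"Mean time to restore: {mean_time_to_restore}")     # 0:25:00
</code></pre><p>The absolute numbers matter less than watching how they move as you change how much, and how fast, you ship.</p>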
<p>Finally, as a human engineer, do not blame the AI tool, and don&#8217;t tolerate anyone else blaming it either. They are tools; you should learn and know how to wield them, risks and all. But it&#8217;s more important than that. If we as engineers are to remain involved (and so employed), then the tools are our delegates. This is how we should be approaching this in all ways, and it&#8217;s a concept with a long legal basis; treat the choices as such.</p><p>So if your delegate goes off the rails and does a bunch of destructive things, you have delegated poorly, whether you understood that or not; you still blew that play. That is your signal to dig in, learn, and get some new guardrails in place. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.fixme.media/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Bottlenecks and AI]]></title><description><![CDATA[Responding to 'code is not the bottleneck']]></description><link>https://www.fixme.media/p/bottlenecks-and-ai</link><guid isPermaLink="false">https://www.fixme.media/p/bottlenecks-and-ai</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Mon, 04 Aug 2025 18:40:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!heFl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1c5552f-bf5c-4dde-80f0-6f4a8caa68c8_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There have been several pieces with the premise that writing code was never the bottleneck for how quickly product and engineering work gets done, and that even if AI tools speed up the coding part, the actual bottlenecks will still remain. This take is missing enough to be wrong, and skips over where there is fundamental improvement available to product and engineering orgs.</p><p>First up, the actual act of writing code may not be <em>the</em> bottleneck, but that doesn&#8217;t mean that it isn&#8217;t <em>a</em> bottleneck. Bottlenecks are simply places in your production process where work to be done piles up, so hands up if you have an empty backlog. Bueller? Bueller? </p><div class="pullquote"><p>AI native development reduces meetings</p></div><p>Now I think there is a fair take that the act of implementing for an individual engineer isn&#8217;t uniformly faster yet. That&#8217;s a combination of tooling, reskilling and cultural issues at play, so sure, we aren&#8217;t transformationally faster (yet) at this phase; but there definitely are teams that are racing up this curve.</p><p>What we also have to remember is that even if the phase doesn&#8217;t get way faster, AI native development can also change the nature of how change is offered. With agentic tools, spinning up alternative implementations at the same time carries close to the same engineering time as a single implementation. Instead of big alignment discussions based on opinions or theories up front, we can just try things and then select winners. </p><p>If your team can&#8217;t decide and align between two similarly scoped libraries for your project? Implement a proof of concept with them both, using AI tools, and decide from evidence. The tests and selection criteria will be reusable across both, so the additional cost is small, but it comes with much more information for the selection; in all likelihood it&#8217;s faster in wall-clock time too. </p>
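<p>To make the &#8220;reusable across both&#8221; part concrete, here&#8217;s a minimal sketch using pytest. The suite is written once against a thin adapter interface and parametrized over the two candidate proofs of concept; the adapter classes here are hypothetical stand-ins, not real libraries.</p><pre><code class="language-python">import pytest

class CandidateA:
    """Proof of concept wrapped around the first candidate library (hypothetical)."""
    def render(self, template: str, ctx: dict) -> str:
        return template.format(**ctx)

class CandidateB:
    """Proof of concept wrapped around the second candidate library (hypothetical)."""
    def render(self, template: str, ctx: dict) -> str:
        out = template
        for key, value in ctx.items():
            out = out.replace("{" + key + "}", str(value))
        return out

# The same suite runs against both candidates; the results drive the selection.
@pytest.fixture(params=[CandidateA, CandidateB], ids=["lib-a", "lib-b"])
def candidate(request):
    return request.param()

def test_renders_a_simple_context(candidate):
    assert candidate.render("hello {name}", {"name": "world"}) == "hello world"

def test_coerces_non_string_values(candidate):
    assert candidate.render("{n} items", {"n": 3}) == "3 items"
</code></pre><p>Whichever candidate wins, the same suite carries forward as the regression net, so the losing proof of concept cost little more than tokens and a short review.</p>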
<p>Which brings us to the implicit core of the point these pieces are making. Generally speaking, we have implementation and coordination activities as engineers, so what we&#8217;re really saying is that it doesn&#8217;t matter if implementation speeds up, as coordination is the larger factor in determining speed, which has certainly been true. The miss here is that as the costs of implementation shift, and especially as the optional costs shift, the coordination points will both move and shrink as a cost factor in the process of shipping change.<br><br>We&#8217;re seeing this in organizations where the product team vibe codes as a discovery and learning step; even if engineering is no faster afterwards, the overall value stream gets much faster because we get better information input in the first pass, and so fewer total passes before something is shipped as done.</p><p>These shifting costs also have effects within a dev team: many more things move under the bar of &#8216;I can just knock that out with little effort right now&#8217;. There is also this wonderful realization that instead of assembling four people for an hour-long meeting to dig into design choices for a refactoring, you can just spin up your agents and offer two or three pull requests as options. Generally it&#8217;s harder to shout down working code, and so even large shifts are much, much faster to get through.</p><p>These organizational rebuilds are sure to be messy during the transition to AI native engineering, but these are things that we&#8217;ve seen before, and of course bottlenecks will appear in different places as processes reform. I&#8217;ve certainly seen enough to be very bullish on the impact this will let me, my team and my org deliver.</p><p>This is a very similar set of forces, I think, to Kent Beck&#8217;s ideas in an article about how <a href="https://tidyfirst.substack.com/p/slow-deployment-causes-meetings?utm_source=newsletter&amp;utm_medium=email&amp;utm_campaign=loopcafe">slow deploys cause meetings</a>. Let&#8217;s phrase it at the positive end: AI native development reduces meetings. Less time doing coordination means more time on implementation activities, and really I would have thought engineers at large would be all about that shift in our day-to-day experience.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Fixme! Subscribe for free to receive new posts and support my work.</p></div></div></div>]]></content:encoded></item><item><title><![CDATA[AI Driven Software Engineering]]></title><description><![CDATA[We're missing the promise, and the risks, for the hype]]></description><link>https://www.fixme.media/p/ai-driven-software-engineering</link><guid isPermaLink="false">https://www.fixme.media/p/ai-driven-software-engineering</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Tue, 08 Apr 2025 16:37:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!heFl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1c5552f-bf5c-4dde-80f0-6f4a8caa68c8_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At the time of writing, the marketplace of AI assisted development tooling is less than three and a half years old. 
Chat based LLM tooling and now agentic assistants are just exploding into the marketplace; there are 244 tools on the <a href="https://landscape.ainativedev.io/">AI Native Dev&#8217;s landscape</a>, with 34 in the editor category alone.</p><p>It is extremely clear that we are in the midst of a disruption wave crashing around us. We know from history that prediction in the midst of a wave like this is a fool&#8217;s game, and yet that isn&#8217;t stopping the hype machine from creating noise, mostly about the extremes.</p><div class="pullquote"><p>While the AI wave is large, fast and incredibly uncertain, our way through is learning by experimentation, as it always has been</p></div><p>At one extreme is the &#8216;within 2 years we won&#8217;t even need engineers&#8217; take, which has already been part of the discourse for a couple of years. Yes folks, next year will be the year of the Linux desktop. Of course this is being used to drive layoffs and fears in some businesses, which is both bad and predictable. While those cuts hurt, they are a long way off the scale that would indicate a true replacement-level belief among company executives.</p><p>The other extreme is the skeptic pole, something like &#8216;all this vibe coded crap is gonna burn everything down&#8217;. There is an absolute ton, several books&#8217; worth really, to say about making things work as products and companies scale, but vibe coding really just looks to empower folks at very early idea experimentation. Any reasonable developer experience and product-focused engineer should, I think, celebrate this, and then offer to roll up the sleeves to make the things work for real.</p><div><hr></div><p>When taken in the large, across the entire industry which produces software (which is everywhere, so that&#8217;s kind of everyone), we will experience the landing of this wave in multitudes. The extremes will be true to some extent, but the majority experiences I think will drop into two clusters, and that really comes down to one choice.</p><blockquote><p>do you want to amplify bad habits, or rebuild better habits and processes and amplify those</p></blockquote><p>Fundamentally the idea is that AI based tooling lowers the effort (toil, cognitive load, etc.) required for a similar lift, which implies it&#8217;s a force multiplier on your habits. Your choice is: do you want to amplify bad habits, or rebuild better habits and processes and amplify those? Funnily enough, DevOps at its core has always been about this question, and yet gave birth to a bunch of products and dogma that promised one-stop utopia; returning to that core mental model will separate success from failure.</p><p>This choice applies, as always, to individual engineers and to engineering orgs in the large, the micro and the macro view. The early numbers are not super encouraging: DORA&#8217;s most recent State of DevOps report and a special impact-of-AI report suggest that we&#8217;ve taken these tools and we&#8217;re just slinging more code, which, given our flow models, has had the predictable result of harming effectiveness and outcomes.</p><p>To leaders I say: get serious about refactoring demands, incentives and the support you offer folks to rebuild their workflows. We are a long way from maturity and stability in the marketplace here, but waiting it out is a bad strategy; you should have been doing this anyway. 
Because waiting is bad, also get comfortable with a ton of experimentation under fuzzy ROI projections; picking high-probability long-term winners is hard to the point of impossible in the current climate.</p><p>For my fellow engineers: for years we&#8217;ve complained about technical debt, so much so that the term has really lost meaning. The core of that has always been roughly: yes, we know better, and we know we need to write tests and documentation (the irony being that AI tooling works best when these exist), but we just don&#8217;t have the time. Well, now those things are easier to do, much easier really, so it&#8217;s definitely the time to start doing it better.</p><p>Part of the reason for the negative reaction to vibe coding is that it sits pretty close to the at-large truth about software engineering. We&#8217;re already drowning in code that is hard to wrestle with; the idea of massively increasing that problem is rightly terrifying.</p><p>While the AI wave (or hurricane maybe) is large, fast and incredibly uncertain, our way through is learning by experimentation, as it always has been.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/p/ai-driven-software-engineering/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.fixme.media/p/ai-driven-software-engineering/comments"><span>Leave a comment</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fixme.media/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Fixme! Subscribe for free to receive new posts and support my work.</p></div></div></div>]]></content:encoded></item><item><title><![CDATA[Consensus]]></title><description><![CDATA[Do we always need everyone on board?]]></description><link>https://www.fixme.media/p/consensus</link><guid isPermaLink="false">https://www.fixme.media/p/consensus</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Fri, 09 Aug 2024 21:34:47 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/147525595/014549d5d205aee53467f273707a2b9b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this latest FIXME conversation I sat down with fellow Infrastructure Engineer Urvashi Reddy. 
Urvashi is currently driving infra at Notion; previously she was at Pinterest.</p><p>We dig into the consensus scale, and talk about theories and stories of bringing people along with change.</p><p>Mentioned during the conversation:</p><ol><li><p><a href="https://linearb.io/dev-interrupted/podcast/why-you-need-to-take-risks-as-an-engineering-leader">Why You Need to Take Risks as an Engineering Leader</a> - Neha Batra on Dev Interrupted podcast</p></li><li><p><a href="https://www.charlesduhigg.com/supercommunicators">Supercommunicators</a> - Charles Duhigg</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Rewrites]]></title><description><![CDATA[Staying away from starting again...]]></description><link>https://www.fixme.media/p/rewrites</link><guid isPermaLink="false">https://www.fixme.media/p/rewrites</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Fri, 10 May 2024 23:41:02 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/144504235/4b1ee04bd5c69b6db81b61a83c69a875.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>The latest FIXME guest is Jack McCloy. Jack works in product infrastructure and design systems, currently at Snowflake.<br><br>We sat down to talk about avoiding rewrites, technical debt, good migrations and a whole chunk of other things, so dig in!</p><p>During the conversation, some materials came up; those are linked below:</p><ol><li><p><a href="https://lethain.com/elegant-puzzle/">An Elegant Puzzle: Systems of Engineering Management - Will Larson</a></p><ol><li><p>And the same author&#8217;s <a href="https://lethain.com/">blog</a></p></li></ol></li><li><p><a href="https://mcfunley.com/choose-boring-technology">Choose Boring Technology - Dan McKinley</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/Thinking_In_Systems:_A_Primer">Thinking In Systems: A Primer - Donella H. 
Meadows</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/The_Death_and_Life_of_Great_American_Cities">The Death and Life of Great American Cities - Jane Jacobs</a></p></li><li><p><a href="https://www.goodreads.com/en/book/show/63329951">Paved Paradise: How Parking Explains The World - Henry Grabar</a></p></li></ol>]]></content:encoded></item><item><title><![CDATA[Monorepos]]></title><description><![CDATA[Multiple monos is fine, right?]]></description><link>https://www.fixme.media/p/monorepos</link><guid isPermaLink="false">https://www.fixme.media/p/monorepos</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Wed, 01 May 2024 04:57:02 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/143616315/3fb4abe46a82d5189e1b71e06f38a0da.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Hbd9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Hbd9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Hbd9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Hbd9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Hbd9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Hbd9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg" width="1152" height="640" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:640,&quot;width&quot;:1152,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Hbd9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Hbd9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f330369-2e86-45f3-9bbe-85e6611daf1c_1152x640.jpeg 848w, 
<figcaption>These are all monorepos????</figcaption></figure><p>For this first episode of the Fixme podcast, I&#8217;m joined by Will Howell to chat about monorepos, especially having <em>multiple</em> monorepos&#8230;</p><p>Will is currently an Architect at Okta, continuing his career-long push to make engineering platforms that just do the right thing&#8482;.</p>]]></content:encoded></item><item><title><![CDATA[Measuring is not a Magic Wand]]></title><description><![CDATA[Driving change takes just a little bit more work]]></description><link>https://www.fixme.media/p/measuring-is-not-a-magic-wand</link><guid isPermaLink="false">https://www.fixme.media/p/measuring-is-not-a-magic-wand</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Mon, 23 Oct 2023 23:42:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4j9k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67a1865f-533e-4e98-8354-03502a44fc3b_512x512" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>The Meeting</h3><p>Sitting down with your coffee in the morning, your heart sinks as you see a new block on your calendar for the day: &#8216;Developer Velocity Strategy&#8217;.
The invite list is a discouraging mix of VPs, senior directors and architects, adding up to a solid 20 invitees. The description says little beyond &#8216;let&#8217;s discuss how we&#8217;re finally going to fix our velocity issues&#8217;, but hey, at least there&#8217;s a doc linked as a pre-read!</p><p>But then the doc is only two thirds of one page. It really amounts to the statement &#8220;Driving down our PR size will dramatically increase our velocity&#8221;, a cursory explanation of why small changes are desirable, and a couple of links to recent engineering performance benchmarking reports from startups selling engineering org analytics tooling; clicking through, you see familiar-looking metrics and breakdowns.</p><p>At the appointed hour you join the zoom and wait out the 7 minutes of laggards before the meeting starts; 6 of you are on the zoom, 7 are in the room, and who knows where the others who accepted are. The meeting opens over the sound of someone hitting their laptop so hard you wonder if they are digging for oil; you can just make out the word-for-word read of the pre-read, and then the floor is opened up for discussion.</p><p>You raise your zoom hand and wait your turn, hoping the room will notice. After 10 minutes they haven&#8217;t, so you speak up: &#8220;What evidence do we have that smaller PRs will increase velocity, and what are we going to do to drive down PR size?&#8221; The response wanders around the credentials of the report source, why these metrics are interesting, why this one metric out of the dozen or so in the report is meaningful, and then tails off, failing to answer either question. So you rephrase slightly: &#8216;What tactical efforts are we considering to push developers to make smaller changes?&#8217;
And that&#8217;s when it happens.</p><p>&#8220;Oh, we don&#8217;t need to drive anything, that which gets measured gets improved.&#8221;</p><figure><img src="https://substackcdn.com/image/fetch/$s_!4j9k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67a1865f-533e-4e98-8354-03502a44fc3b_512x512" width="512" height="512" alt=""></figure>
<p>You watch, dumbfounded, as 3/4 of the attendees nod, some smiling. Silence is now the way forward; groupthink has accepted the magic ability of counting to change an outcome. There is another half hour of, well, waffle, but everyone seems very happy that we&#8217;ll be printing gold any minute now, and the meeting disbands.</p><h3>The Diagnosis</h3><p>What the meeting has missed is that these reports typically classify teams based on measures; that is basically clustering on characteristics, which at most implies correlation. A causal link between performance level and a practice is sometimes there, but it runs in a particular direction, normally towards the team&#8217;s ability to provide business value (revenue or some other real-world thing). The internal document, though, implicitly assumes a causal link between a characteristic and being high performing, and then says that by adopting the practice, we will drive ourselves to that performance. That is, on its own, extremely unlikely; not mathematically impossible, but unlikely for sure.</p><p>If we had more, smaller changes, it&#8217;s likely we would be delivering more things by count, but by value? Maybe, maybe not; it&#8217;s honestly tough to know. Smaller changes seem to be more likely on a team with already effective delivery (lead time, change failure rate, etc.), so do small changes improve delivery, or does good delivery enable small changes? In all likelihood, unknown.</p><p>But then there is the widely accepted platitude.</p><p>These broad, blanket statements have a comforting simplicity to them; they sound nice, and we want to believe them.</p><p>However, it would take you no time at all to look around the world and find absolutely horrifying examples of things we know the measure or count of, and that only continue to get worse.</p><p>In short, we&#8217;ve confused correlation and causation, we&#8217;ve dropped into accepting simple-sounding things for truths, and we&#8217;re trying to remove context to make a problem more personal and tractable.</p>
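<p>To see the trap concretely, here&#8217;s a toy simulation (entirely made-up numbers, not from any of the reports): an underlying &#8220;delivery health&#8221; drives both smaller PRs and more value shipped, and PR size ends up strongly correlated with value even though, in this model, PR size causes none of it.</p><pre><code>import random

random.seed(42)

# Hypothetical model: an underlying "delivery health" drives BOTH
# smaller PRs and more value shipped. PR size itself causes nothing here.
teams = []
for _ in range(500):
    health = random.gauss(0, 1)
    avg_pr_size = 400 - 120 * health + random.gauss(0, 60)   # healthier -> smaller PRs
    value_shipped = 50 + 20 * health + random.gauss(0, 10)   # healthier -> more value
    teams.append((avg_pr_size, value_shipped))

# Pearson correlation between average PR size and value shipped
n = len(teams)
mean_x = sum(x for x, _ in teams) / n
mean_y = sum(y for _, y in teams) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in teams) / n
std_x = (sum((x - mean_x) ** 2 for x, _ in teams) / n) ** 0.5
std_y = (sum((y - mean_y) ** 2 for _, y in teams) / n) ** 0.5
print(f"corr(PR size, value shipped) = {cov / (std_x * std_y):.2f}")
# Prints a strongly negative correlation, yet mandating smaller PRs in
# this world would move nothing: the link is inherited from health.</code></pre><p>A benchmarking report clustering these teams would happily put the small-PR teams in the top bucket, and the correlation would be real; the causal story the pre-read wants is still missing.</p>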
<h3>Progress Despite Surroundings</h3><p>These scenarios are pretty common, and pretty demotivating all around. That said, we normally do have the ability, in some size of domain, to work on change more systematically, and if you&#8217;re employed by a business there is an (at least) implicit responsibility to try to improve the status quo.</p><p>As always, telling folks that they are wrong and/or have misunderstood really fundamental things is not a play with a high success rate, and the skepticism that comes through in the framing of the questions above pretty much takes us right up to that door.</p><p>In the room, try to get folks to agree to some targeted experiments, and nudge towards the larger context of the metrics, not just one piece. If we frame this as &#8216;what are two or three things that would be good candidates to measure, work on, and see how things shift?&#8217;, we can often get to a more fruitful discussion, and also, plausibly, to the conclusion that there are no good candidates and the whole idea should be dropped.</p><p>For the things we own, we can also make sure we track the full context. Not only can we do this, we should. Knowing and working on our effectiveness, and on the collective tooling and processes that contribute to it, is ultimately our own responsibility.</p><p>At the end of the day, we just want to convert the simplistic statement into &#8216;if we theorize about a characteristic we&#8217;d like to alter, then measure the state, try some changes, and measure again, then we can improve&#8217;; this is basically the scientific method, or software engineering as we also sometimes like to call it.</p>
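<p>A minimal sketch of that loop, with hypothetical <code>measure</code> and <code>apply_change</code> hooks standing in for whatever your team actually instruments:</p><pre><code>from typing import Callable

def improvement_experiment(
    measure: Callable[[], float],      # e.g. median lead time over the last N weeks
    apply_change: Callable[[], None],  # the intervention we theorized about
    revert_change: Callable[[], None],
    min_relative_gain: float = 0.1,    # demand a real shift, not noise
) -> bool:
    """Measure, try a change, measure again; keep the change only if it helped.

    Assumes lower is better (lead time, change failure rate, ...).
    """
    baseline = measure()
    apply_change()
    after = measure()
    if after <= baseline * (1 - min_relative_gain):
        print(f"kept: {baseline:.1f} -> {after:.1f}")
        return True
    revert_change()
    print(f"reverted: {baseline:.1f} -> {after:.1f}")
    return False</code></pre><p>The whole point is the second measurement; counting once and hoping is exactly the magic wand from the meeting.</p><p>However, if you still end up stuck in the simplistic circular arguments, and struggle to iterate on even your own sphere, it might be time to start looking for an escape hatch.</p>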
]]></content:encoded></item><item><title><![CDATA[Introducing Fixme]]></title><description><![CDATA[Stories and conversations to make tech a better place to live]]></description><link>https://www.fixme.media/p/coming-soon</link><guid isPermaLink="false">https://www.fixme.media/p/coming-soon</guid><dc:creator><![CDATA[Adam Berry]]></dc:creator><pubDate>Thu, 12 Oct 2023 14:50:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!heFl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1c5552f-bf5c-4dde-80f0-6f4a8caa68c8_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A good FIXME comment in code is a tale of epic woe, a kind of tablet on the mountain warning would-be travelers to turn back.</p><p>The idea of this space (a newsletter and podcast to begin) is to work through those tales, find the mistakes at the bottom, hopefully find a better path, and have a good laugh while we do.</p><p>Even with a deep faith in the ability of tech to change what&#8217;s possible and drive positive change, we must acknowledge that our thought processes and motives matter a great deal in how our products reach and affect the world.</p><p>In short, how matters.</p><p>We&#8217;ll dig for the logical fallacies and other very, very human mistakes that individuals and organizations make, we&#8217;ll look at how to survive them, and together we&#8217;ll bring better products and change into the world as a result.</p><p class="button-wrapper"><a class="button primary" href="https://www.fixme.media/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>