Psychological Safety Just Got More Important
We ship the org. Let's make sure we're shipping one that can handle what's coming
I’ve been having a lot of conversations lately about transparency. About feedback. How to give it, how to receive it, how to make it useful instead of performative. And about asking for the unvarnished version of how things are going as I step into a larger portfolio.
Every time I go to ask for that feedback, I catch myself waiting. Waiting until I have a clearer picture. Until the thinking is more developed. Until the idea is polished enough to be worth sharing.
A recurring idea has emerged from these conversations: we’re running strategy at the speed of bug fixes now. The things that used to take months (shaping a direction, socialising it, building alignment) are now happening in days. Sometimes hours. The AI transition has compressed the planning horizon in ways we haven’t fully reckoned with. And waiting until your thinking is polished before sharing it doesn’t just slow you down; it makes the thinking obsolete by the time it circulates.
So I’ve been trying something different. Sharing the half-formed things. Asking for feedback before I’m confident in the answers. Running small experiments to see what breaks before committing to a direction.
It’s scary. Not terrifying, more like the uncomfortable but clarifying kind of scary. And the more I’ve sat with it, the more I’ve come to believe that the transparency piece isn’t just a personal preference or a cultural aspiration. It’s a structural necessity for the AI transition. If AI is a multiplier, the psychological safety underneath transparency is a multiplier too.
The Foundation
Back in 2012, Google ran a study that became known as Project Aristotle. They set out to understand what made teams effective. They studied group norms, team composition, individual personalities, everything they could measure. The finding that emerged, again and again, was psychological safety: the shared belief that the team is safe for interpersonal risk-taking. Speaking up with a half-formed idea. Admitting you don’t know. Challenging a decision without fear of being punished for it.
Safety wasn’t just one factor among many. It was the multiplier. Teams with high psychological safety consistently outperformed teams with lower safety, even when controlling for everything else.
AI is a multiplier too. We’ve talked a lot about AI as a force multiplier for individual productivity: 2x, 5x, whatever the numbers are this quarter. But the multiplier framing cuts both ways.
When psychological safety is low and you introduce AI, you don’t get “low safety + AI = normal outcomes.” You get compounding failure modes. AI amplifies the dynamics that are already there. In an environment where people don’t speak up, where mistakes are punished, where challenging ideas feels risky, AI makes those environments faster and worse. The surface area for errors increases. The velocity of bad decisions accelerates. The institutional memory of what went wrong gets overwritten before anyone can learn from it.
When psychological safety is high and you introduce AI, you get the inverse. People share the half-formed things. Experiments generate evidence. Feedback loops tighten. AI accelerates the good dynamics instead of the bad ones.
This is the metaphor worth sitting with: AI × Psychological Safety. Or something like that. The math isn’t the point. The point is that the base matters more now, because whatever’s there gets amplified. Invest in safety, and AI works for you. Don’t, and it works against you.
The Information Architecture Problem
If thinking in the open is the answer, what’s the system for it?
Here’s where it gets practical. Open thinking without structure is noise. You get half-baked ideas scattered across Slack threads that nobody can find later. You get the same conversations happening in seventeen different places. You get the information equivalent of a codebase with no architecture: technically possible to navigate, but expensive and error-prone.
The challenge is audience sizing: what’s the right audience for thinking at different stages?
Early-stage thinking, the genuinely half-formed stuff, needs a small, trusted circle. Not because it’s secret, but because the signal-to-noise ratio is low. You’re exploring, not presenting. The feedback you need at this stage is “does this direction have legs?” not “here’s a detailed critique of your assumptions.”
Mid-stage thinking, where you’ve run a small experiment and have evidence, opens up. You can bring in a broader set of peers. The signal is stronger because you’ve generated some. This is where the thinking starts to become useful to others, not just yourself.
Late-stage thinking, validated directions, things you’re ready to act on, belongs in the open, broadly. This is where org-wide transparency pays off. When people can see the evidence behind decisions, they can contribute context, flag risks, and align their own work.
The failure mode at most organizations is skipping stages. Putting early-stage thinking in broad forums, where the lack of evidence makes it look undercooked. Or keeping mid-stage thinking in small circles when the experiment’s results could inform other teams. The result is either noise or missed opportunity.
The other failure mode is feedback loops that are too slow. Thinking in the open only works if the people you’re sharing with can engage fast enough to matter. If it takes three weeks to get a response, the landscape has shifted. This is where the speed problem I mentioned earlier bites hardest. The AI transition doesn’t just compress the time to produce ideas; it compresses the time to validate them. Organizations that can give fast, high-quality feedback on open thinking will outlearn those that can’t.
Conway’s Law Has Something to Say
We talk about Conway’s Law mostly in terms of architecture: you ship the org structure into your systems. Teams that don’t communicate produce systems that don’t integrate. Teams that do communicate produce coherent architectures.
But Conway’s Law applies to information flow too. The communication structures of an organization shape what information can flow through it. Team boundaries, ownership models, reporting lines, these are all filters. They determine what gets shared, with whom, at what latency, with what fidelity.
If you want thinking to happen in the open, the org architecture has to support it. Not just tolerate it, actively enable it. That means:
Boundaries that allow cross-pollination: In teams that only communicate through official channels, information moves at the speed of process, not conversation.
Ownership that doesn’t mean isolation: “Owning” a service or domain shouldn’t mean “you’re the only one who can have opinions about it.”
Leadership that models it: If the senior folks polish before presenting, they’re signaling that early-stage thinking isn’t welcome. The culture follows the modeling.
This is the architectural work that can’t be skipped. You can talk about psychological safety all day. You can encourage thinking in the open. But if the structure around it constrains the flow, the culture won’t overcome it. You have to design for it.
The Multiplier Math
Let’s pull the threads together.
Outcomes = Psychological Safety × Information Architecture × AI Multiplier
When the base factors are strong, the multiplier works for you. AI amplifies good judgment, tight feedback loops, and evidence-based iteration. When the base factors are weak, AI amplifies silence, slow feedback, and expensive mistakes.
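To make the compounding concrete, here’s a toy calculation, a minimal sketch where every number is an illustrative assumption rather than a measurement. It treats each base factor as a score near 1.0 (below 1.0 is a drag, above 1.0 is an asset) and applies the same AI multiplier to two hypothetical orgs.

```python
# Toy model of the multiplier framing above. Every number here is an
# illustrative assumption, not a measurement: scores near 1.0 are
# neutral, below 1.0 is a drag, above 1.0 is an asset.

def outcomes(safety: float, info_architecture: float, ai: float) -> float:
    """Relative outcomes under Outcomes = Safety x Info Architecture x AI."""
    return safety * info_architecture * ai

# Two hypothetical orgs, before AI (ai=1.0) and after a 3x multiplier.
weak_before   = outcomes(0.6, 0.7, 1.0)  # 0.42
strong_before = outcomes(1.3, 1.2, 1.0)  # 1.56
weak_after    = outcomes(0.6, 0.7, 3.0)  # 1.26
strong_after  = outcomes(1.3, 1.2, 3.0)  # 4.68

# The same 3x multiplier triples the gap between the two orgs.
print(strong_before - weak_before)  # 1.14
print(strong_after - weak_after)    # 3.42
```

The absolute numbers are meaningless; the shape is the point. A shared multiplier never closes the gap between strong and weak base factors; it stretches it.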
The implication isn’t just “build a safe culture.” It’s “the investment in safety is now more valuable than it was before, because the multiplier is higher.” This isn’t soft stuff to optimize when you have time. This is infrastructure for the AI transition.
The same is true for information architecture. The cost of bad information flow (fragmented knowledge, slow feedback, duplicated conversations) is amplified when AI is in the loop. AI tools work better with coherent, accessible information. They work worse with tribal knowledge, scattered context, and undocumented decisions.
The Ask
This is a moment for leaders to build the base factors, not just adopt the multiplier.
Safety without structure is vibes. Structure without safety is bureaucracy. You need both.
And you need both now. The organizations that figure out how to do thinking in the open (transparent, experimental, fast-feedback) will outlearn those that don’t. AI makes that possible in a way it wasn’t before. The cost of producing and sharing ideas has dropped. The question is whether the org architecture can absorb them at the same rate.
We have a chance to build organizations where AI compounds good work. Where the multiplier works for us instead of against us. Where the speed of strategy matches the speed of bug fixes, and we still get it right.
The scary part (thinking out loud, sharing the half-formed things, asking for feedback before you’ve earned the right to have answers) is also the way in.
We can do hard things. Let’s not skip the hard thing because it’s hard.
What does thinking in the open look like in your organization? What are the structures that enable it, or constrain it?
Related reading:
Project Aristotle and psychological safety - Google’s research on what makes teams effective
An Elegant Puzzle by Will Larson - on org design for enablement
Conway’s Law - on the relationship between org structure and system design


