Strive for clarity
Give me six hours to chop down a tree and I will spend the first four sharpening the axe.
“Bias to action” is one of the most powerful traits you can develop as an engineer. It gets ideas out of your head and into the real world. It creates fast feedback loops. It exposes wrong assumptions, reveals the unknown unknowns, and gives you the only data that actually matters: how things behave in reality, not in your imagination.
But bias to action is often misunderstood.
It’s not a call to move blindly. It’s not permission to try random things until something magically works. It’s not “just build it” without turning on your critical thinking.
Bias to action is not an excuse to skip thinking altogether — to sprint into the codebase before understanding the problem, the trade-offs, or the simplest path forward. Don’t confuse speed with progress, motion with impact. Shipping fast is only useful if you’re shipping something that matters, something aligned with reality instead of your first impulse.
Clear thinking is the skill of seeing a problem as it really is — separating facts from assumptions, understanding constraints, and making deliberate trade-offs before you move.
Engineering is almost never about finding the perfect answer — it’s about choosing the right trade-offs. You rarely get fast, cheap, and simple at the same time. Clear thinking means deciding what you’re optimising for, what you’re willing to pay for it, and what you’re willing to let be “good enough” for now.
Clear thinking before acting isn’t the opposite of bias to action — it’s what makes bias to action work. Acting fast makes an impact only if you’re heading in roughly the right direction. Clear thinking gives your bias to action a direction.
And today, in a world where generative AI is making it easier than ever to write code, clear thinking is no longer optional — it’s your leverage. The bottleneck is no longer typing. It’s reasoning. It’s judgment. It’s the ability to zoom out before you zoom in, to understand the problem enough that your first iteration is already pointed in the right direction.
Clear thinking is not analysis paralysis. It’s not waiting for perfection. It’s the opposite: it’s the discipline of slowing down just a bit, enough to choose a good angle before you take your next step. It’s catching the idea that doesn’t make sense before you pour days into it. It’s noticing the simpler angle hiding in plain sight before you build the complicated one. It’s using your brain before you use your keyboard.
This chapter is about building that discipline — the set of habits that can help you think with more clarity, creativity, and precision before you act. It’s about using your mind as your highest-leverage tool, especially in a world where everyone suddenly has access to infinite keystrokes.
Think, then act
In 2025, I was working on Grafana Mimir 3.0, a new major version of our time series database at Grafana Labs. One of its biggest changes was a new architecture that decoupled ingestion and query paths, putting Kafka in between.
When paired with a Kafka-compatible backend running on top of object storage – like WarpStream – it suddenly became possible to run Mimir across multiple availability zones (AZs) without paying cross-AZ data transfer for the data plane. And the data plane is where most of the bytes are transferred. At around $0.02/GB in major cloud providers, cross-AZ transfer can easily become the single largest cost of running Mimir in a multi-AZ setup.
But even if the data plane stays in-zone, the control plane still needs to sync across AZs, and that part isn’t free. Mimir uses a gossip-based protocol for the control plane. Gossip is beautifully resilient to node failures, but it can be hilariously inefficient from a networking perspective.
A gossip update doesn’t spread once; it fans out multiple times. To transfer a 1 MB change set, you might end up pushing 10x, even 100x, that amount across the cluster, depending on node count and desired propagation speed. On a medium-sized Mimir cluster, you can easily hit a few tens of MB/s of cross-AZ transfer just for gossip.
Now scale that out. Suppose you run Grafana Cloud and you have 100 Mimir clusters, each doing 10 MB/s of cross-AZ data transfer. At $0.02/GB, that’s roughly $50K per month, or $600K per year — money that could fund a team of three or four people. Stakes are high.
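The back-of-the-envelope math above is worth making explicit. A quick sketch, using the illustrative figures from the text (not measured numbers):

```python
def monthly_gossip_cost(clusters: int, mb_per_sec: float,
                        usd_per_gb: float = 0.02) -> float:
    """Estimated monthly cross-AZ transfer cost for a fleet of clusters.

    clusters:   number of Mimir clusters in the fleet
    mb_per_sec: cross-AZ gossip traffic per cluster, in MB/s
    usd_per_gb: cross-AZ transfer price (typical major-cloud rate)
    """
    seconds_per_month = 30 * 24 * 3600  # ~2.59M seconds
    gb_per_month = clusters * mb_per_sec * seconds_per_month / 1024
    return gb_per_month * usd_per_gb


# 100 clusters, 10 MB/s each, at $0.02/GB:
cost = monthly_gossip_cost(clusters=100, mb_per_sec=10)
print(f"${cost:,.0f}/month, ${cost * 12:,.0f}/year")
# → roughly $50K/month, ~$600K/year
```

Run it and the "$50K per month" figure falls out directly: a steady 1 GB/s of aggregate gossip traffic, priced per gigabyte, compounds into a headcount-sized bill.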
We had an idea to fix it: split a multi-AZ Mimir cluster into per-AZ clusters, keep gossip isolated within each AZ, and then use a more efficient synchronization method between AZs that avoided write amplification. Conceptually simple — but implementation-wise? Fragile, complex, and one bug away from split-brain. Doing it robustly would take weeks.
The idea worked on paper, but my gut hated it. Too many moving parts. Too fragile. Too much that could go wrong. But after brainstorming with the team, it was the only working idea we had.
That’s when I forced myself to slow down. Instead of jumping into code and heroically “just building it”, I closed my laptop, got on my bike, and went out for a ride.
At first, I stopped thinking about it. I just enjoyed the sun, nature, and the silence. Then, as often happens, the problem came back to me — but with space this time, not pressure.
And suddenly it hit me: this wasn’t a cluster-splitting problem. It was a networking routing problem. We didn’t need per-AZ clusters. We didn’t need a second synchronization mechanism. We didn’t need to redesign the control plane. We just needed gossip to be zone-aware.
All we needed was to control how gossip messages propagate: pick a few nodes responsible for cross-AZ synchronization, let them push updates across zones, and let regular gossip handle everything inside each AZ. No changes to the protocol — just a smarter strategy for choosing how updates flow.
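To make the idea concrete, here is a minimal sketch of that target-selection strategy. The names and structure are mine, not Mimir’s actual implementation: every node gossips normally within its own zone, and a few deterministically chosen "bridge" nodes per zone additionally push one copy of each update to every remote zone.

```python
import random


def pick_gossip_targets(self_node, peers, fanout=3, bridges_per_zone=2):
    """Choose which peers receive a gossip update from self_node.

    `peers` is every cluster member (self included), each a dict with
    "name" and "zone" keys. Regular nodes gossip only inside their own
    availability zone. The first `bridges_per_zone` nodes of each zone,
    sorted by name so every node independently agrees on who they are,
    also push one update per remote zone — so cross-AZ fan-out never
    amplifies beyond a handful of copies.
    """
    by_zone = {}
    for p in peers:
        by_zone.setdefault(p["zone"], []).append(p)
    for nodes in by_zone.values():
        nodes.sort(key=lambda p: p["name"])

    # Regular in-zone gossip: a few random local peers, excluding self.
    local = [p for p in by_zone[self_node["zone"]]
             if p["name"] != self_node["name"]]
    targets = random.sample(local, min(fanout, len(local)))

    # Am I one of my zone's deterministic bridge nodes?
    bridges = by_zone[self_node["zone"]][:bridges_per_zone]
    if any(b["name"] == self_node["name"] for b in bridges):
        for zone, nodes in by_zone.items():
            if zone != self_node["zone"] and nodes:
                targets.append(random.choice(nodes))  # one copy per remote zone
    return targets
```

The key property of this sketch: non-bridge nodes generate zero cross-AZ traffic, and each update crosses zone boundaries only a bounded number of times, instead of fanning out over every pairwise link.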
Once that clicked, the whole thing became trivial. I got home, opened my laptop, and wrote the patch, which was just a few dozen lines of code. Four working hours later, it was deployed to a development cluster.
The result was impressive: a 95% reduction in cross-AZ data transfer, with no significant impact on propagation latency. The solution wasn’t a multi-week engineering effort. The solution was an hour of coding, but I couldn’t see it earlier because I didn’t have clarity.
This story is just one example, but in hindsight, I’ve solved many hard problems the same way: stop, let the idea digest, and let clarity form before touching the keyboard. And I’ve seen the same pattern in many impactful engineers I’ve worked with.
Thinking before acting doesn’t mean moving slower. It doesn’t mean endless analysis or getting stuck in your head. It means refusing to remove thinking from the process. Good engineering isn’t just about execution — it’s about clarity before execution.
Now, I know what you might be thinking: “If I had to wait for clarity every time, I’d never get anything done.” I hear you. And you’re right.
Sometimes you have to act without perfect clarity, ship something, and let understanding catch up later. That’s fine. But that should be the exception, not the default. Strive for clarity first. Only fall back to building without it when you truly must.
So, what techniques can you adopt? Here are a few that have proven powerful over time.
Write it down
Write down the problem statement, the current context, the options you’ve considered, and what you think about each. Even if you start from an existing specification or a GitHub issue, you should still write down how you understand the problem and the context.
Not to produce documentation, but to produce clarity.
The moment you try to articulate a problem precisely, with your own words, you discover all the parts you don’t actually understand. Thoughts that felt “clear enough” in your head suddenly look vague, contradictory, or incomplete when written down. Writing forces your brain to stop bluffing.
This isn’t just a personal preference. Richard Feynman famously said that “if you can’t explain something simply, you don’t really understand it”. Writing is how you force that simplicity. It exposes fuzzy reasoning and missing assumptions instantly.
Daniel Kahneman, in Thinking, Fast and Slow, explains that our brain loves to take shortcuts — jumping to conclusions, smoothing over gaps, and confidently believing half-formed thoughts. Writing breaks that spell. It switches you from fast, intuitive thinking into slow, deliberate thinking, where you’re forced to confront the actual logic of what you’re saying.
Derek Sivers captured it perfectly in his blog post How to ask your mentors for help:
I have three mentors.
When I’m stuck on a problem and need their help, I take the time to write a good description of my dilemma, before reaching out to them. I summarize the context, the problem, my options, and thoughts on each. I make it as succinct as possible so as not to waste their time.
Before sending it, I try to predict what they’ll say. Then I go back and update what I wrote to address these obvious points in advance. Finally, I try again to predict what they’ll say to this, based on what they’ve said in the past and what I know of their philosophy.
Then, after this whole process, I realize I don’t need to bother them because the answer is now clear.
If anything, I might email to thank them for their continued inspiration.
Truth is, I’ve hardly talked with my mentors in years. None of them know they are my mentors. And one doesn’t know I exist.
This is exactly what happens when you write: you become your own mentor. You force yourself to think clearly, to be specific, to justify your reasoning. And very often, the answer quietly appears while you’re writing.
This doesn’t mean you should write a multi-page essay for every three-line bug fix. Writing has a cost too, so keep it proportional. For small problems, a few bullet points in a scratch file are enough. For big, ambiguous, or expensive problems, go deeper. The goal isn’t pretty prose — it’s to invest a little time to get clarity now so you don’t waste a lot more time later.
Avoid cognitive traps
Your brain is a magnificent machine — fast, intuitive, creative — but your subconscious can also steer it in subtle, predictable ways. These are cognitive biases.
A cognitive bias is a systematic error in the way your brain interprets information. It’s your mind taking a shortcut — fast, effortless, but potentially wrong. These shortcuts can become traps. They distort your reasoning, hide better options, and nudge you toward the wrong conclusion with total confidence.
You can’t eliminate biases — nobody can. But you can put guardrails in place: learn to spot when they show up, pause for a moment, and adjust your thinking before they lead you astray.
What are the worst offenders? These aren’t the only cognitive biases, but they are the ones I’ve seen derail engineering decisions most often:
- Confirmation bias: You unintentionally search for evidence that supports your idea and filter out anything that contradicts it. This is often paired with "wishful thinking": believing something is true because you want it to be true, not because evidence supports it.
  For example, you’re convinced a latency spike is caused by a slow database query. You inspect the database metrics and — unsurprisingly — you find some slow queries. That seems to confirm your theory, so you stop investigating. Optimizing those queries might help, but you completely miss the actual root cause: an under-provisioned caching layer in front of the database, evicting entries too quickly.
- Optimism bias: You assume the best-case scenario is the most likely scenario. You underestimate timelines, overlook risks, or believe nothing significant will go wrong.
  For example, in the late ’90s, Netscape had a dominant browser called Navigator. It wasn’t perfect — the codebase was messy and hard to maintain — so leadership decided to rewrite it from scratch as Netscape 6. The engineering team believed the rewrite would be faster than fixing the existing codebase and would produce a far superior browser. Instead, the rewrite dragged on for three years, during which the company effectively stopped improving the current version. By the time Netscape 6 finally shipped, it was slower, buggier, and too late. Meanwhile, Internet Explorer surged ahead, and Netscape lost market share so quickly it never recovered.
- Availability bias: You give too much weight to the first example that comes to mind, even if it’s irrelevant. Your brain grabs the explanation that’s easiest to recall, not the one that’s most likely true.
  For example, a friend of mine — working at a large tech company — once told me a story. During a global high-latency incident, the on-call engineers immediately blamed the CDN. It had caused two major outages that quarter, so it was the first suspect that came to mind. They spent an hour digging into CDN configs and cache-hit ratios — until someone finally checked the backend RPC fan-out graphs and discovered a performance regression in the latest software release. The CDN was fine; availability bias had simply pointed everyone in the wrong direction.
- Overconfidence: You trust your assumptions, intuition, or past experience more than actual evidence. It shows up as skipping verification steps, underestimating risks, or believing “this code can’t possibly run” or “this change is safe” without checking. It makes you rely on gut feeling instead of data, and ignore signals that may contradict your first impression.
  For example, in 2012, Knight Capital deployed an update to its trading system, but one of the eight servers didn’t receive the new code. That server still contained a dormant feature called Power Peg, an internal test program — unused since 2003 — that aggressively bought high and sold low. Alongside the software update, a configuration change was rolled out to all servers, including the one running the outdated code. Unfortunately, the configuration reused a feature flag that Power Peg had once depended on, unintentionally reactivating the old logic. When trading opened, the legacy code began firing orders uncontrollably, flooding the market with millions of unintended trades. In just 45 minutes, Knight Capital lost $440 million. An SEC investigation later pointed directly to bad engineering practices and overconfidence: manual deployments, unchecked assumptions, and “dead” code that everyone believed could never run again.
Cognitive biases often hide behind feelings. If you notice any of these signals, treat them as red flags:
- You feel certain about something without checking the data.
- You catch yourself defending your idea instead of testing it.
- You dismiss alternative explanations too quickly.
- You keep repeating the same hypothesis even though the evidence is thin.
- You’re hoping reality behaves like your plan.
If you notice any of these, pause. You’re likely thinking with your fast brain, not your clear brain.
So, how can you avoid these traps? A few simple techniques:
- Pretend you’re wrong and force yourself to prove it:
  - If someone on my team strongly disagreed with this approach, what argument would they use?
  - What assumptions am I treating as facts, and what happens if just one of them is false?
  - If this idea failed in production, what would the postmortem say?
- Ask someone else to challenge your reasoning:
  - What am I missing?
  - Where could this fail?
  - Which part of my solution is the most fragile?
- Turn pessimist — ignore best-case thinking:
  - What’s the worst-case outcome?
  - What could break at scale?
  - What happens if the rollout doesn’t go as planned? What would reverting look like?
- Check reality:
  - Which metrics, logs, or traces can validate (or invalidate) my assumptions today?
  - Have we attempted something similar before? How did it go, and why?
  - How long did the last migration actually take? Given our current scale, is our estimate realistic?
  - Looking at past incidents, what went wrong in analogous situations?
Biases won’t disappear — they’re part of being human. But once you learn to spot them, you take back control.
Clear thinking isn’t about being perfect. It’s about catching the trap before it catches you — and making decisions based on reality, not instinct.
Brainstorm with others (AI included)
You’re smart, but you’re also biased, limited, and occasionally blind to the obvious. Welcome to being human.
One of the most impactful ways to gain clarity is to talk through your ideas with someone else. A peer, a teammate, someone from a different team, someone with more experience, someone with less experience — it doesn’t matter. What matters is a different perspective.
Diversity of thinking, background, and experience is the whole reason teams exist. If everyone thought and acted exactly like you, a team wouldn’t have more value than a single engineer; it would just be more expensive. The magic of a team is that the value becomes greater than the sum of the parts. Different brains see different things. Someone spots an assumption you didn’t know you were making. Someone else sees a simpler path because they solved a similar problem before. Someone new to the team asks a “stupid” question that reveals complexity you’ve stopped noticing.
Brainstorming works best in small groups. Two to four people is usually ideal. In big groups, people talk less, hold back their weird-but-useful ideas, and focus more on sounding smart than thinking clearly.
This isn’t just feel-good team culture — it’s well studied. Pixar built a whole creative process around this idea with their Braintrust meetings: bright, diverse minds challenging a story from different angles, to reveal blind spots. Scott Page, in The Difference, goes even further: diverse groups often outperform groups made of “the best” individuals because they explore more of the solution space. They literally see more.
You don’t brainstorm with others to get the answer. You brainstorm to expand the problem space, uncover blind spots, and challenge the narrative forming in your head.
And this applies not just to humans.
I’ve found brainstorming with AI surprisingly useful — not because the AI magically produces perfect solutions, but because it forces me to explore the problem from angles I wouldn’t naturally consider. Some of those angles may be wrong. Some may be naive. Some may reveal misunderstandings. But wrong angles still spark new ideas.
AI is like an endlessly patient early-career engineer with infinite stamina: it throws ideas at you — some good, some bad, many forgettable — but enough interesting ones to help refine your thinking. You’re not outsourcing the problem; you’re outsourcing the angle-hunting. And sometimes that’s exactly what you need.
You don’t need a full brainstorming meeting for every trivial issue. A five-minute chat or a quick back-and-forth with an AI is often enough.
What matters is widening the angle, not chasing every idea. Talking through an idea — with a teammate, with an AI, or both — often reveals the clarity you couldn’t find alone.
Let ideas incubate
If writing things down and brainstorming with others still doesn’t make the problem or the solution space clear, resist the urge to start coding right away. Pause, and step away.
Disconnect. Take a shower. Ride your bike. Run in the park. Walk your dog. Call a friend for an apéritif. Or simply switch to a different task or project — just stop actively thinking about your problem and let your subconscious do the work.
Two things happen when you do this:
- Perspective kicks in. That “urgent” bug might not be so urgent. Problems often shrink when we step back and let our subconscious sort them out. Have you ever had a problem that felt like a mountain yesterday but looks like a molehill today? That’s perspective at work.
- Clarity returns. When you come back, your mind is fresh. Neuroscientists call this “incubation”. Your brain continues processing the problem in the background, even when you think you’ve stopped working on it. Suddenly, solutions you couldn’t see before reveal themselves.
Sometimes the solution clicks after a few minutes; other times it takes longer. But the result is almost always the same: when clarity comes, the implementation feels effortless. You just know what to do.
I know what you’re thinking: “My project is late! I can’t afford a break!” But hear me out. When you’re stuck, you’re not working — you’re spinning in circles. The code won’t fix itself, and neither will you by rerunning the same thought process ten times.
It feels counterintuitive. It feels like a waste of time to “wait for inspiration” while sipping coffee or working on another project. But you will spend that incubation time anyway. If you rush into building without a clear understanding, you’ll spend days rewriting, debugging, and second-guessing. That’s like a forced incubation.
So when you can, give your ideas room to breathe. Step away. Let them ferment.
Breaks aren’t laziness — they’re strategy.
Start now, but with a clear step
Clear thinking isn’t a luxury. It’s not something you do when you “have time”. It’s the foundation of impactful engineering in a world where writing code is easier than ever, but choosing the right code to write is harder than ever.
The techniques in this chapter — writing things down, spotting cognitive traps, brainstorming with others, letting ideas incubate — are not rituals. They’re safeguards. They’re how you stay aligned with reality. They’re how you keep your work pointed in the right direction instead of sprinting into a wall at full speed.
Clear thinking prevents that.
You don’t need perfect clarity — you need enough clarity for the size of the bet you’re making. A good rule of thumb: if you can explain the problem, the constraints, your chosen option, and its main risks in a few clear sentences (or a five-minute conversation), you’re ready to act. If you can’t, you’re probably still guessing.
So before your next big decision — before you open a large pull request, approve a design doc, start a rewrite, or ship a risky change — pause for a moment. Take one clear step before you take ten fast ones. Run your idea through the habits in this chapter. Sharpen your axe before you swing it.
Because in the long run, the engineers who make the biggest impact are the ones who think clearly — and then act with a fast feedback loop.
This work is licensed under CC BY-NC 4.0