Make it real
Do it without waiting for someone to tell you to do it. Welcome to the 1%.
Asking why gives you direction. Choosing what matters keeps you focused. But don’t stop there — clarity without action is just wasted potential.
The value of your work isn’t in brainstorming meetings or in a perfectly written design doc. Those help, sure, but they’re just the baseline. The real value lies in what you ship to customers. That’s what moves the needle. That’s what creates impact.
And to ship, you need to act — to turn ideas into products, features, or whatever solves a customer’s problem. To get your hands dirty, learn what breaks, and make it better. Engineering, at its core, is a contact sport.
The most surprising thing? Failing at execution is easier than you think. Projects don’t fail because people are dumb — they fail because teams move too slowly, lose focus, overthink decisions, or move in the wrong direction.
At the same time, executing well isn’t hard. It doesn’t require supernatural powers. It’s mostly a handful of habits — focus, short feedback loops, fast validation, a bit of common sense, clear ownership — that anyone can learn.
But before any of that, there’s a first, simple step: move. The most impactful engineers I’ve ever worked with all shared one thing in common — a bias to action.
So, don’t wait for someone to tell you to do it. Once you know the why and the what, hesitation has no excuse left.
Build, ship, measure, and repeat
No matter how many hours you spend in meetings, how many design docs you write, or how many “strategic discussions” you have, one truth remains: reality always has the last word.
You can brainstorm for weeks. You can run every scenario in your head, whiteboard the perfect architecture, and write a design doc so detailed it could win a Pulitzer. But the moment your idea meets the real world, it will start behaving differently. Customers will use it in unexpected ways. Edge cases will crawl out of the shadows. Your “brilliant shortcut” will backfire. That clever abstraction will crumble under load.
Welcome to software engineering — where theory and reality have a complicated relationship.
You’ll never have perfect information. You’ll never fully predict user behavior, market response, or system failure. Every plan — no matter how good — is just a collection of educated guesses. Unless you have a crystal ball (and you don’t), you’re continuously navigating uncertainty.
Don’t get me wrong: this isn’t an invitation to try random things without thinking, just because you can’t predict the future. An idea that doesn’t make sense on paper will rarely turn into a brilliant solution in practice. Clear thinking before you act can help you get to a solution, and experience and gut feeling can definitely be your allies. But at the end of the day, there will always be unknown unknowns. No matter how hard you try, you will never completely eliminate them.
That’s why the most impactful engineers don’t obsess over being right — they obsess over learning fast. They know that every assumption is just a hypothesis until validated. So they build, ship, measure, and repeat. Fast.
Don’t guess, measure
Don’t guess, but measure. Validate your assumptions in the field — early and frequently. In essence, shorten the feedback loop until reality can’t hide from you.
Each loop tightens your understanding. You build something small, ship it, measure what happens, and adjust. Then you do it again. Like tightening a spiral around the truth. It’s an iteration, but with intent:
- Do customers actually use it?
- Does it solve the problem we thought it did?
- Is our performance assumption still true at scale?
Amazon calls it working backwards. Tesla calls it rapid iteration. Startup founders call it not dying. The principle is the same: feedback beats foresight.
I know what you’re thinking: “Sure, but my project will take at least a quarter before a minimum viable version exists. There’s no reasonable way to validate assumptions sooner”.
Maybe. But probably not. In my experience, there’s always a way — if you’re willing to get creative. You don’t need to test the entire solution. You just need to test a slice of the risk. Instead of validating the whole product, validate the assumptions that could kill it.
Not convinced? Try these tactics:
- Fake it before you make it. Before Dropbox wrote a single line of sync code, Drew Houston made a 3-minute demo video showing what the product would do — and watched signups explode. He didn’t need a product to test demand. He just needed evidence.
- Prototype the riskiest part. Building a distributed cache? Don’t design the full architecture. Mock the API and benchmark a simplified version. You’ll learn what breaks, what scales, and whether your idea even makes sense under load — in days, not months.
- Dogfood ruthlessly. Before public release, use your own product internally. Real usage exposes hidden friction and silly assumptions faster than any design review ever could.
- Shadow-launch. Roll out your feature to a small percentage of traffic, collect telemetry, and compare performance. Alternatively, mirror live traffic to the new version while users continue interacting with the old one. This lets you compare both solutions side by side — and see how the new system behaves under real-world load, without risking production stability.
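To make the shadow-launch tactic concrete, here is a minimal Python sketch of deterministic percentage-based bucketing, the mechanism behind “roll out to a small percentage of traffic”. The user IDs, the 5% threshold, and the hashing choice are illustrative assumptions, not a prescription.

```python
# Minimal sketch: deterministic percentage-based bucketing for a gradual rollout.
# All names and numbers are illustrative.
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Bucket users deterministically so the same user always gets the same path."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# Route ~5% of users to the new code path, then compare telemetry between paths.
users = [f"user-{i}" for i in range(10_000)]
on_new_path = sum(in_rollout(u, 5) for u in users)
print(f"{on_new_path} of {len(users)} users would hit the new path")
```

Hashing the user ID, rather than sampling randomly per request, keeps each user on one path, which makes before/after comparisons meaningful.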
Some of these tactics may feel risky in highly regulated or large-scale systems. In those cases, test assumptions in safe, controlled environments — the principle of early feedback still applies.
When you ship small, you make failure cheap and learning fast. You can afford to be wrong — repeatedly. Each iteration makes the next one smarter. It’s like compounding interest, but for learning. You need early signals, not late surprises. Every day you wait to get real feedback is a day you delay learning.
Learning quickly can completely change your course. In some extreme cases, the most successful products ever built began as accidental discoveries:
- Twitter started inside a podcasting company you’ve probably never heard of, called Odeo.
- Slack was a failed game that turned into the team’s internal chat tool.
- YouTube started as a dating site — “Tune In, Hook Up”. Yes, really.
The goal isn’t to reach the final solution faster. The goal is to discover the truth sooner — so you don’t spend three months building the wrong thing beautifully. Because nothing is slower than perfecting the wrong thing.
Don’t be afraid to experiment
Woodworkers have a saying: “measure twice, cut once”. In woodworking, when you cut a piece of wood, it’s gone. Forever. If you smooth, engrave, or plane a surface and make a mistake, there’s no undo button. There’s no way to revert the material back to its original state. You have to throw the piece away and start again from scratch. In the worst cases, a single wrong cut can mean rebuilding the entire product. Many decisions in woodworking are irreversible, which is why careful preparation and precision are not optional; they are survival. I imagine surgeons have a similar saying, too, though I’ve never been a surgeon.
Software engineering, on the other hand, is a different world. If you cut a piece of code, you can bring it back by reverting a git commit. If you add, modify, or remove a feature and later regret it, you can usually roll it back with relative ease. The whole product doesn’t need to be rebuilt from scratch. Of course, there are situations where changes become much harder to undo: corrupted or lost data, financial transactions gone wrong, or a bug that damages customer trust and brand reputation.
But those are exceptions. For the most part, decisions in software engineering are reversible. And this gives you a unique advantage that many other industries simply don’t have: the freedom to experiment and move quickly. You can keep your organization lean, make decisions fast, test ideas with minimal risk, and iterate again and again until you get it right — all while keeping the downside limited.
Jeff Bezos once put it this way in an interview: "Most decisions are two-way doors. If you make the wrong decision, if it’s a two-way door, you pick a door, you walk out, you spend a little time there. If it turns out to be the wrong decision, you can come back in and pick another door. Some decisions are so consequential - and so important - and so hard to reverse that they really are one-way door decisions. You go in that door, you’re not coming back - and those decisions have to be made very deliberately, very carefully."
The two-way door is a powerful decision-making framework. Think of reversible moves as low-cost experiments. Treat one-way moves like surgical procedures. Ask yourself a few key questions to determine whether it’s easily reversible, for example:
- Will undoing this require more effort than doing it in the first place?
- Does it involve customer data in a way that’s hard or impossible to restore?
- Is there compliance, contractual, or regulatory exposure if it goes wrong?
- Could it cause visible customer harm — lost money, a public outage, a privacy breach, or negative press?
- Will this change break backward compatibility for existing users or systems?
If the answer to all of these is no, you’re looking at a two-way door. If even one answer is yes, then it’s a one-way door — or at least risky enough to treat it like one.
Move fast on two-way doors. Keep the decision process lean and biased toward action. Build, deploy, measure, learn, and iterate. Speed is the advantage here — use it.
Move slowly on one-way doors. Take the time to gather data, consult stakeholders, and analyze risks. Discuss thoroughly with the team, and give people enough time to digest and raise concerns. Don’t leave risks unaddressed before proceeding. Whenever possible, make the irreversible reversible: break the big change into smaller bets, add safeguards, and roll out gradually to reduce risk.
Don’t confuse speed with recklessness. Measure twice when you’re standing in front of a one-way door, and be surgical about the execution. For two-way doors, make small bets, set explicit rollback criteria, and iterate — fast. Build habits that make reversibility the default, practice quickly spotting which door you’re standing in front of, and you’ll get velocity with controlled risk.
Master napkin math
When designing or experimenting, napkin math is one of your best allies.
Mastering it gives you an edge — whether you’re comparing alternative solutions, estimating the impact of an optimization, or evaluating how much that shiny new service will cost to run in production.
You don’t need perfect numbers. You just need to be directionally correct — the right order of magnitude. If your napkin math tells you a new feature will cost $10,000 a month instead of $100, that’s enough to change your decision.
The goal isn’t perfection — it’s clarity. Napkin math helps you see what’s worth doing before you spend a week benchmarking or a quarter building.
Start with boundaries
Even if they’re unrealistic, define your best-case and worst-case scenarios. Those are your fences. You can’t do better than the best case. You can’t do worse than the worst.
For example, say you’re debugging a slow API that’s breaching your latency SLO. You’ve identified a slow function as the culprit. Your boundaries are:
- Worst case: do nothing — keep the function as is.
- Best case: remove it entirely — no computation, no latency.
Now, if removing the function entirely would only improve latency by, say, 20%, then even in the best case, it may not be worth the effort. Maybe caching the whole response or parallelizing the workload will get you a bigger win — and this simple napkin math can point you in that direction within minutes.
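Here is what that boundary check might look like in Python. The latency numbers are invented purely for illustration.

```python
# Napkin math for the example above; all numbers are invented.
slo_ms = 150          # the latency target we're breaching
total_ms = 200        # current end-to-end latency
slow_func_ms = 40     # time spent in the suspect function

best_case_ms = total_ms - slow_func_ms      # best case: the function vanishes entirely
best_case_gain = slow_func_ms / total_ms    # fraction of latency we could recover

print(f"Best possible latency: {best_case_ms} ms ({best_case_gain:.0%} improvement)")
print("Meets the SLO even in the best case?", best_case_ms <= slo_ms)
# Best possible latency: 160 ms (20% improvement)
# Meets the SLO even in the best case? False
```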
Boundaries keep you honest. They tell you whether an idea is worth more thinking or is already a dead end.
Know your invariants
Every system has constraints that won’t budge — at least not anytime soon. Know them.
- Network round-trip time between regions
- Disk I/O latency and throughput
- Per-node CPU or memory cost
- Database query fan-out limits
- Eight bits in a byte (yes, some engineers still get confused)
These are your non-negotiables.
For example, say you’re estimating how long it’ll take to move 10 TB of data between two data centers. You might start with the 100 Gbps link between them and estimate around 15 minutes. But if your source data lives on network-attached storage capped at 10 Gbps, that’s your real bottleneck — in practice, it’ll take over two hours. You’ll be off by 10× if you miss that invariant.
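The arithmetic behind that estimate fits in a few lines. The figures below simply restate the example; swap in your own link and storage speeds.

```python
# Transfer-time napkin math, ignoring protocol overhead and retries.
data_bits = 10 * 10**12 * 8        # 10 TB expressed in bits

link_bps = 100 * 10**9             # 100 Gbps inter-DC link
nas_bps = 10 * 10**9               # 10 Gbps network-attached storage cap

naive_min = data_bits / link_bps / 60   # ~13 min if the link were the limit
real_min = data_bits / nas_bps / 60     # ~133 min: the NAS is the real bottleneck

print(f"Naive: {naive_min:.0f} min, bottlenecked: {real_min:.0f} min "
      f"({real_min / naive_min:.0f}x slower)")
```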
Make educated guesses
Napkin math relies on intuition, but not imagination. If you have to guess, make educated guesses. Base them on real data: telemetry, customer usage, cost reports, or previous projects.
Say you’re estimating how much a new logging system will cost. You don’t know the exact volume yet, but you can look at current logs, estimate the average log line size, multiply by logs per day, and check what your provider charges per GB stored. Even if you’re off by 20%, that’s fine. You’ll still know whether it’s roughly $100 a month or $10,000 — and that’s all you need to decide if it’s worth continuing.
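A sketch of that estimate in Python. Every input is an assumed placeholder you would replace with your own telemetry and your provider’s pricing.

```python
# Logging-cost napkin math; all inputs are assumed placeholders.
lines_per_day = 50_000_000       # from current telemetry
avg_line_bytes = 400             # measured from existing logs
price_per_gb_month = 0.50        # your provider's storage price per GB

gb_per_month = lines_per_day * avg_line_bytes * 30 / 1e9   # ~600 GB/month
monthly_cost = gb_per_month * price_per_gb_month

print(f"~{gb_per_month:.0f} GB/month, roughly ${monthly_cost:.0f}/month")
# Even if these guesses are 20% off, the order of magnitude holds.
```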
Be realistic, not idealistic
Napkin math is not wishful thinking. It’s reality testing. Don’t let optimism bias creep in just because you want the answer to look good. If you’re missing data, use past experience or comparable systems to stay grounded.
For example, say you’re designing a new in-memory cache. Your idealistic side might assume “cache hits will cover 90% of requests”. But based on similar workloads, you know 60% is more realistic. That 30-point difference means four times as many misses hitting your backend — enough to blow your cost model.
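The same reality check in code. The request rate and hit rates are illustrative; the point is that the miss rate, not the hit rate, drives backend load.

```python
# How the cache-hit assumption changes the load that reaches the backend.
# Request rate and hit rates are illustrative.
requests_per_sec = 10_000

for label, hit_rate in (("idealistic", 0.90), ("realistic", 0.60)):
    backend_qps = requests_per_sec * (1 - hit_rate)
    print(f"{label}: {hit_rate:.0%} hits -> {backend_qps:,.0f} req/s on the backend")

# idealistic: 90% hits -> 1,000 req/s on the backend
# realistic: 60% hits -> 4,000 req/s on the backend
```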
Napkin math isn’t about confirming your hopes — it’s about protecting you from them. It’s reasoning under uncertainty — and the faster you can do it, the faster you can make smart calls without waiting for perfect data. It’s the difference between saying “let’s run a benchmark next week” and figuring out why it won’t work in a matter of minutes.
Don’t let the perfect stand in the way of good
Let’s be honest: we’re all perfectionists.
The architecture? It’s never clean for us. The code? There’s always one more refactoring to do. The tests? Never comprehensive enough. The UI? It could always be sleeker, smoother, more polished. The feature set? Well, there’s always one more “must-have” thing you could add.
Perfectionism is noble — it signals care, skill, and high standards. But if you always wait for the perfect solution (spoiler: it will never arrive), you’ll never ship. Your product, feature, bug fix, or optimization will stay on your laptop or in your staging environment, while time passes, deadlines shrink, and stress rises. Nothing reaches your users, nothing delivers impact.
A couple of years ago, a software engineer from my neighborhood reached out after hearing that I’d co-founded a startup early in my career. He wanted feedback on a mobile app he was building — a local events aggregator for tourists (we live in a very touristic area). Within minutes, I pointed out a couple of obvious challenges: keeping event listings fresh and getting actual users. But he kept circling back to technical details — why his design was better, what framework he used, how his app “stood out”. I told him to just ship it that day, get real feedback from the market, and iterate. He nodded politely — then ignored me.
Months passed. Every now and then, I’d bump into him around town. The app was always “almost ready”. There was always one last feature, one last bug, one last improvement. Two years later, on a sunny afternoon, I saw him again and asked if he’d finally launched. He had. But it didn’t go as he expected. After two years of refining every pixel, the app was flawless — in his own mind. Unfortunately, the market didn’t care, and no one was using it. Chasing perfection hadn’t brought success. It only delayed failure.
Many successful products aren’t technically flawless. They’re not always the fastest, most elegant, or most complete. But they shipped sooner than the alternatives, hit the market when users were ready, and solved a real problem with a good enough solution. Perfect? No. Successful? Absolutely.
This isn’t an invitation to build sloppy, half-baked software. It’s a call to understand the trade-off between perfect and done. It’s a call to ship the good enough, even if there are ten more ways to make it better. It’s a call to embrace time to market as a strategic advantage.
Shipping imperfect work — and iterating based on real feedback — is where impact lives. Done beats perfect. Every time.
Stay humble
Last, but not least: stay humble.
You have to accept that your brilliant idea might flop, that your assumptions might be wrong, and that your customers might not care. And you have to be okay with that — as long as you learn something valuable before the next build.
Sometimes, the entire premise of a project is wrong, and the best course of action is simply to cancel it. You have to be ready to let go. To stop insisting on that “brilliant” idea that doesn’t work. No matter how elegant it looks on paper — if it doesn’t work in the field, it doesn’t work. Move on.
A few years ago, I worked with an engineer who spent over a year on a database optimization project he was deeply passionate about. The idea made perfect sense during the design phase and got the green light from the team. But when it finally reached production, the results were underwhelming. The optimization helped only a tiny subset of queries — too few to make a real impact.
Instead of recognizing that the project’s premise was flawed, cutting his losses, and moving on, he doubled down. More months passed, results didn’t improve, frustration grew, and eventually burnout followed. From zero to burnout, it took just over a year — a year spent defending a failing idea, not a failing implementation. A faster feedback loop — and a bit more humility — could have saved time, money, and a career setback.
The hard truth is that the longer you’ve been working on something, the harder it becomes to walk away.
Once you’ve invested quarters of work, sunk costs and pride kick in. Backing out feels like a personal failure — maybe even a hit to your reputation. And yes, it might sting in the short term. But spending even more time on something that doesn’t work only multiplies the pain.
That’s why validating ideas early is so powerful. If you test assumptions in days instead of quarters, even with a scrappy prototype, you de-risk both the project and yourself. It’s much easier to kill an idea that’s a few days old than one that’s been part of your identity for a year.
People sometimes tell me they see only the successful projects I’ve led — the ones that made a visible impact — and assume I just keep picking winners. What they don’t see is that behind every success, there are nine other ideas that didn’t take off. Some were tested for a week, others for a single day.
But that’s the whole point. You don’t need to be right 100% of the time. You can fail on 90% of your ideas — as long as you fail fast and redirect your energy toward the 10% that actually work.
Nobody will remember the nine days, or even the nine weeks, you spent testing ideas that went nowhere. They’ll remember the nine months you spent building the one that mattered. But if you keep pushing a dead idea just because you can’t admit it was wrong — that’s what people will remember, and that’s what will stall your career.
This work is licensed under CC BY-NC 4.0