The Estimation vs. Computation Dilemma
How to Decide What to Measure, and When
Should I try to measure it, or should I guess?
I’ve been wrestling with this question for years, going back to when I started building systems without neatly written requirements. Each time I needed to define a feature or decide how something should look, I’d ask that same question.
At first it was a minor hassle, but once I started CloudFruit in 2022, the dilemma blew up 10x. Then we launched HiiBo, where building a product and a business at the same damn time turned that tension up another 10x.
I’m talking about the Estimation vs. Computation dilemma: deciding what to measure precisely, and what to approximate to keep momentum.
If you measure too much, you risk drowning in analytics overhead. If you guess too often, you risk building on shaky ground.
It’s a predicament that will tickle your brain when there’s no pre-existing blueprint: just you, your uncertain specs, and a ticking clock.
Guess or Measure?
Plenty of tasks don’t justify real measurement early on. You might have a rough sense of how big a dataset is, or how many users might sign up in the first month.
So you guess, just well enough to avoid catastrophe. You skip building a full analytics pipeline because you’d rather push the product forward.
But certain areas demand accuracy. If you guess the cost of an external API and undershoot by a factor of five, you could blow your budget the moment real traffic hits. Guessing is cheap until the error margin can kill your business. Oopsie.
Why is this so relevant right now? Because more and more new businesses are adopting an “analytics-first” mindset, building data-collection frameworks before they’ve even confirmed the product’s viability. That can slow everything down.
Now this is coming from an analytics guy. I started doing analytics in 2009 and I’ve touched just about every data analysis tool / BI tool in existence. I’ve been writing SQL for 15 years and building actionable analytics for about 13.
Every single organization I’ve worked for and/or been a part of has had the problem of trying to overanalyze. I do it myself sometimes, because I’m scared to guess.
But let me show you the rewards of delaying your gratification and measuring at the right times…
The HiiBo Case: Building a Social Dashboard
We want to show traction, both for investors and to flex a little publicly.
So we are building a “social dashboard” with all our follower counts, engagement rates, and so on, across various channels.
An ideal end state is a dashboard that auto-updates from each platform’s API. Great in theory. But implementing that is no small build: lots of connectors, lots of potential breakpoints. So we are starting small. We picked a data-visualization tool (Looker Studio) and manually input data to confirm we actually need everything we think we need.
By creating simple metrics and putting them into a simple, easily usable tool, we can focus on the visualization and display while still retaining scalability. We need to consider the audience here: nobody is going to be doing deep dives into this data right now except us. We are building simple, clean, presentable metrics that clearly demonstrate our growth.
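To make “simple, presentable metrics” concrete, here’s a minimal sketch of the kind of manually entered snapshot data a dashboard like this consumes. The channel names and numbers are hypothetical placeholders, and `ChannelSnapshot` is an illustrative structure of mine, not something from our actual build.

```python
# A minimal sketch of manually entered social metrics.
# All names and numbers below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ChannelSnapshot:
    channel: str
    followers: int
    impressions: int
    engagements: int  # likes + comments + shares

    @property
    def engagement_rate(self) -> float:
        """Engagements per impression, as a percentage."""
        return 100 * self.engagements / self.impressions if self.impressions else 0.0

# Hand-entered weekly snapshots; replace with real numbers as you collect them.
snapshots = [
    ChannelSnapshot("LinkedIn", followers=1200, impressions=45000, engagements=1800),
    ChannelSnapshot("X", followers=800, impressions=30000, engagements=600),
]

for s in snapshots:
    print(f"{s.channel}: {s.followers} followers, {s.engagement_rate:.1f}% engagement")
```

A flat structure like this pastes straight into a Google Sheet feeding Looker Studio, which is the whole point: no connectors, no breakpoints, just enough data to see whether the metrics are worth automating.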
Perhaps you can see the impact of the Estimation vs. Computation dilemma here: we could go full automation from day one, but that might stall progress. Instead, we do a partial approach, light on computation, heavier on “see how it feels.” That way, we get a quick read on whether the metrics are actually helpful.
Where Computation Matters
Sometimes a guess isn’t safe. HiiBo is an LLM-neutral AI, meaning we integrate with various LLMs (OpenAI, Claude, etc.). Those integration costs can’t be a guess.
If we underestimate token usage, we might sell a subscription that’s break-even or negative-profit at scale. We had to do real math: understand call frequency, typical conversation length, and token overhead. A big mismatch between guess and reality could bury us in overhead costs.
So we rolled up our sleeves, counted projected usage, tested different load scenarios, and decided HiiBo Alpha will be $25 a month (coming 6/9/25 with discounts for pre-purchase). That’s a result of actual computation, not “eh, we’ll see.” Because a slip-up there could wreck the entire product line.
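The kind of computation described above can be sketched as a back-of-envelope unit-economics model. To be clear, every rate and usage figure below is a hypothetical placeholder I chose for illustration, not HiiBo’s actual provider pricing or load data.

```python
# Back-of-envelope unit economics for an LLM-backed subscription.
# All rates and usage figures are hypothetical placeholders.

PRICE_PER_1K_INPUT = 0.003   # $ per 1K input tokens (assumed provider rate)
PRICE_PER_1K_OUTPUT = 0.006  # $ per 1K output tokens (assumed provider rate)

def monthly_llm_cost(calls_per_day: int, avg_input_tokens: int,
                     avg_output_tokens: int, days: int = 30) -> float:
    """Projected per-user LLM spend for one month."""
    calls = calls_per_day * days
    input_cost = calls * avg_input_tokens / 1000 * PRICE_PER_1K_INPUT
    output_cost = calls * avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT
    return input_cost + output_cost

# Stress-test a $25/mo subscription against light, typical, and heavy users.
subscription = 25.00
for scenario, calls in [("light", 10), ("typical", 40), ("heavy", 120)]:
    cost = monthly_llm_cost(calls, avg_input_tokens=1500, avg_output_tokens=500)
    print(f"{scenario}: ${cost:.2f}/mo -> margin ${subscription - cost:.2f}")
```

Running scenarios like this is exactly how a price point survives contact with reality: the heavy-user case is the one that tells you whether a flat subscription can go underwater.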
Some tasks demand thorough measurement, some demand heuristics. The trick is identifying which is which. If your margin for error is small, do the math. If your margin for error is large, and time is short, guess!
Final Musings
When building from scratch, like I did at CloudFruit and then again at HiiBo, you don’t have existing data or existing tools. You can easily get seduced by building a perfect measurement system, trying to glean clarity from every angle. But that devours resources. Meanwhile, the real product lags.
Early on, it’s often better to produce partial solutions quickly. Hard-code some data. Accept approximate knowledge. Then refine only if the data or insight it provides is mission-critical. Because whether you’re launching an ERP or an AI platform, you can’t “perfectly measure” your way to success. You can only keep shipping.
Don’t measure everything from the start; the overhead could bury your momentum. But don’t guess when the stakes are existential. It’s a tug-of-war every time, especially when you’re building something new. We are always trying to dance between the two mindsets: estimate where the margin of error is acceptable, compute where we can’t afford to be wrong.
It’s a simple philosophy in principle, but not always in practice. Then again, that’s the nature of building something that doesn’t exist yet. You juggle partial knowledge and tight deadlines, weaving guesses and data into a functioning system. In the end, progress beats perfection?—?and survival outranks everything else.
About the Author
Sam Hilsman is the CEO of CloudFruit & HiiBo. If you want to invest in HiiBo or oneXerp, reach out. If you want to become a developer ambassador for HiiBo, visit www.HiiBo.app/dev-ambassadors.