Signal, Noise, and Scale
The Economics of Information and Judgment
One Takeaway: Growth Isn’t One Sided showed that different problems need different thinking. Some problems need systematic approaches; others need adaptation. Economics can help you tell the difference, and it can explain why companies waste resources applying the wrong approach to the wrong challenges.
Scalable solutions work for some business challenges while others need non-scalable approaches. Companies that try to scale everything can miss opportunities that need nuance. Companies that customize everything struggle to achieve efficiency.
This creates a critical challenge for growing organizations.
How do you tell which problems need optimization and which need adaptation?
Why do sophisticated analyses sometimes produce worse results than simple rules of thumb?
How can you build an organization that excels at both scalable and non-scalable problem-solving?
Economics can help here. For decades economics has taught that not all information is the same. Knowing this can help you match your problem-solving method to your information environment.
Two Ways to Solve a Similar Problem
When Starbucks expands to a new US city, they use a highly analytical, scalable approach: demographic analysis, traffic pattern studies, and real estate algorithms, all aimed at understanding foot traffic and sales potential. Their site selection model has been optimized across thousands of locations. More data makes the model better. Each new store provides information that improves results for the next one.
When Starbucks entered China, the same approach initially failed. They lost money for the first nine years. Their model did not capture things like tea culture, differing social patterns, or competitor dynamics.
They couldn’t just apply their proven formula. They needed local market experimentation. They needed partnerships with regional operators who understood context. They needed to adapt the Starbucks concept itself to different cultural preferences. This was a fundamentally non-scalable approach that required judgment rather than optimization.
So. What’s the difference between these two situations?
It’s not that China is more complex than the US.
It wasn’t the case that Starbucks lacked analytical capabilities.
The difference is the signal-to-noise ratio in the information available for decision-making.
For US expansion: thousands of previous locations provided high-signal data about what works. Statistical patterns were reliable. More analysis genuinely helped.
For China entry: zero comparable precedents meant low-signal information. Historical patterns from different contexts provided noise, not signal. More analysis of US data wouldn’t have helped because the underlying relationships were different in China.
Understanding when you have signal versus noise is underrated and misunderstood. Being aware of this allows you to match your approach correctly. It helps you know whether calculation or judgment is the more appropriate way to make a decision.
Companies that make this distinction scale effectively. Those that apply sophisticated analysis to problems that need judgment stall.
Signal, Noise, and Decision-Making
Businesses encounter two fundamentally different types of information: Signal and Noise.
Signal represents reliable patterns that predict future outcomes with reasonable accuracy. Seasonal demand cycles, established customer preferences, and price change data provide signal. Signal allows analytical approaches to be effective.
When you have signal, the calculation approaches from When to Calculate and When to Judge help optimize decisions. This is because past patterns do a reliable job of predicting future performance.
Noise represents random variation that appears meaningful but doesn’t predict anything useful. One-off events, shifting competition, changing customers, and new regulations don’t provide reliable guidance for future decisions.
When you’re dealing with noise, judgment approaches help you deal with uncertainty. In these cases, optimization approaches can actually make performance worse.
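A minimal simulation can make this concrete. The sketch below uses made-up numbers: a “signal” series with a real seasonal cycle and a “noise” series with the same mean but purely random variation. A seasonal forecasting model beats a naive average on the signal series, but does worse than the naive average on the noise series, because the “pattern” it fits there is random.

```python
import numpy as np

def forecast_errors(series):
    """Seasonal model vs naive mean, forecasting year 4 from years 1-3."""
    history, actual = series[:36].reshape(3, 12), series[36:]
    seasonal = np.sqrt(np.mean((history.mean(axis=0) - actual) ** 2))
    naive = np.sqrt(np.mean((series[:36].mean() - actual) ** 2))
    return seasonal, naive

t = np.arange(48)  # four years of monthly observations
results = {"signal": [], "noise": []}
for seed in range(200):  # average over many simulated histories
    rng = np.random.default_rng(seed)
    # Signal: a stable seasonal cycle plus small random variation.
    results["signal"].append(forecast_errors(
        100 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 48)))
    # Noise: random variation with a similar overall spread.
    results["noise"].append(forecast_errors(100 + rng.normal(0, 15, 48)))

for name, errs in results.items():
    seasonal, naive = np.mean(errs, axis=0)
    print(f"{name}: seasonal model RMSE {seasonal:.1f} vs naive mean {naive:.1f}")
```

Typical output shows the seasonal model dramatically reducing forecast error on the signal series while slightly increasing it on the noise series. The “sophisticated” model doesn’t just fail on noise; it underperforms doing nothing clever at all.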
Why Signal-to-Noise Ratios Determine Scalability
The signal-to-noise distinction is a question of statistical inference under sampling constraints.
Consider two market entry decisions:
Entering your 50th US city. You have 49 previous market entries to learn from. Even if each market is unique, you have enough data to identify patterns that generalize. Which demographic indicators predict success? What store formats work in different density areas? How do local competitors typically respond?
With 49 data points, you can distinguish patterns from randomness. If your analysis says “cities with X demographic profile succeed 80% of the time,” that’s statistically meaningful. You have high signal relative to noise.
Entering your first international market. You have zero comparable samples. Even if you analyze the new market, you can’t distinguish which differences matter from which are random. Is the difference in coffee consumption patterns meaningful or misleading? Will competitive dynamics mirror your home market or differ?
With zero comparable samples, you cannot distinguish signal from noise. Any pattern you think you see might be coincidence.
Scale creates sample size. Sample size enables reliable statistical inference. Statistical inference justifies optimization.
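A back-of-the-envelope sketch of the same point, with hypothetical success counts: the same observed ~80% success rate means very different things at 49 samples versus 5, and with zero samples any rate is plausible.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson 95% confidence interval for an observed success rate."""
    if n == 0:
        return (0.0, 1.0)  # no data: any true rate is plausible
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - half, center + half)

# Hypothetical counts: roughly the same observed rate, different sample sizes.
for successes, n in [(39, 49), (4, 5), (0, 0)]:
    lo, hi = wilson_interval(successes, n)
    print(f"{successes}/{n} successes: plausible true rate {lo:.0%} to {hi:.0%}")
```

With 49 entries, an observed ~80% success rate pins the true rate to a usable range (roughly 66% to 89% here). With 5 entries, the range (roughly 38% to 96%) is so wide it barely constrains a decision.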
When you’re doing something for the first time (or the first few times), you do not have high signal. You don’t have enough samples to distinguish patterns from noise. More sophisticated analysis doesn’t help. The limitation is statistical, not analytical.
The scalable/non-scalable distinction maps directly onto signal versus noise.
Scalable problems have strong signal-to-noise ratios. Supply chain optimization, pricing in mature markets, and process improvement all involve patterns. These patterns persist across contexts and time periods.
The operator and refiner functions excel at scalable problems. Their systematic approaches can identify and improve reliable relationships.
Non-scalable problems have low signal-to-noise ratios. New market entry, innovation, and differentiation all have unique contexts and characteristics. Analytical approaches can’t capture these effectively. Reliable statistical inference is impossible in these situations. This means judgment and experimentation are the only rational approaches for these problems.
The creator function excels at non-scalable problems. Their experimental approaches can discover what works without requiring predictable patterns.
Most scale-ups waste resources by treating noise like signal. They are quick to apply optimization approaches to situations that need adaptation. This happens even when they don’t yet have the sample size for reliable patterns to emerge.
How Problems Move From Non-Scalable to Scalable (And Back)
The scalable/non-scalable distinction isn’t permanent. Problems can evolve as you gain experience and as conditions change.
You are on a learning (and data) curve. Your first market entry is non-scalable. You don’t have enough data to optimize, so you rely on judgment. You try different approaches, see what works, and learn from failures.
But each new market entry provides data points. By your 10th entry, patterns begin to emerge. Maybe by your 50th entry, you have enough signal to start optimizing. You can identify which variables matter and build models that predict performance.
Experimentation converts unknowns into data. That data enables analysis. That analysis enables optimization.
But conditions can change. Uber’s market entry playbook worked across US cities; they had a scalable approach. When they entered India, Brazil, or China, existing patterns didn’t apply. Local dynamics, regulations, and competition differed fundamentally.
The problem became non-scalable again. Some (not all) of the signal they’d accumulated in the US became noise in other countries. They needed new experimentation to generate new market-specific signal before they could optimize.
Things like technological shifts, regulatory disruptions, or innovations can invalidate built-up signal. When underlying relationships change, historical data stops predicting future performance. Scalable problems become non-scalable again, and you have to accumulate new signal under the new conditions.
This leads to an important strategic point. You will have to invest in experimentation and judgment when building signal. This is true whenever you are early in a new domain or entering unfamiliar territory. The goal is to learn, not optimize. Success metrics should focus on how much and how well you’re learning per action.
You can then shift to optimization as signal accumulates (scaling a proven model). The goal becomes extracting the most value from identified patterns. Success metrics should focus on improvement.
But you must be ready to shift back to experimentation. Conditions can, and do, change. Accumulated signal can become old news. In these cases, organizations that keep optimizing based on outdated patterns become efficient at only one thing: doing things that no longer matter.
This explains why set-in-stone organizational designs fail. Companies optimized for scaling proven models struggle when disruption makes their models obsolete. They’ve built systems for exploiting signal, but they’ve lost the capability for generating new signal. When their accumulated knowledge becomes obsolete, they can’t adapt.
Two Types of Knowledge, Two Types of Problems
Philosopher Michael Polanyi’s insights are essential here. Polanyi identified the difference between explicit and tacit knowledge. This distinction maps neatly onto scalable versus non-scalable business challenges.
Explicit knowledge can be written down, transmitted, and analyzed quantitatively. Financial data, documentation, market research, and operational metrics are all explicit knowledge, and all of it travels well through organizations. This type of knowledge supports scalable approaches. You can aggregate it, compare it, and optimize it across different contexts.
Tacit knowledge cannot be fully explained or said out loud, but it guides effective judgment. Things like sentiment, market timing, competitive intuition, and operational “feel” represent tacit knowledge. This knowledge lives in the experience and understanding of the people doing the work. It supports non-scalable approaches: it provides context and insight that you can’t capture in formal analysis, but that often determines success or failure.
Another, more descriptive, term for this is “embodied knowledge.” It exists in how people recognize patterns and make judgments rather than in documentable facts and relationships.
Why Tacit Knowledge Matters More for Non-Scalable Problems
Explicit knowledge captures patterns that can generalize (scalable). Tacit knowledge captures patterns that don’t (non-scalable).
When you document a sales process, you’re converting tacit knowledge to explicit knowledge. This works when the pattern is stable enough to generalize across contexts. You can write down:
“Ask these questions in this order. Address these common objections this way. Close using this approach.”
But when each sales situation is unique, documentation often fails. The patterns that predict success might include subtle customer cues, timing, or context. Things like:
Do they lean forward or back when discussing price?
When to push versus when to give space
How does this deal fit their broader company politics?
These are things experienced salespeople recognize but can’t fully articulate. You learn them by doing, not by studying or analyzing in a formal sense.
This is why scaling sales through documentation and training has limits. You can document the generalizable patterns. But you can’t document the contextual patterns. At least not all of them. And even if you could, it would likely be too expensive or take too much time.
As companies grow, they get better at managing explicit knowledge. They build up improved data systems and analytical muscle. They build dashboards, create documentation, establish best practices, and create training programs.
But they often lose access to tacit knowledge. Distance grows between decision-makers and front-line experience. The VP of Sales has never met most customers. The product team doesn’t observe how features actually get used. Strategic decisions get made based only on reports and metrics (explicit knowledge). The tacit understanding of context gets filtered out because it doesn’t fit in a standardized report.
This creates organizations that can analyze scalable problems well. But, they struggle with non-scalable challenges. They’re excellent at optimizing what can be documented and measured. But, they’ve lost the capability to recognize and respond to what matters in specific contexts.
Building Organizations That Preserve Both
Codify what generalizes. Convert tacit knowledge into processes where patterns are stable enough to document. When you’ve identified approaches that work consistently, capture them in systems that let you scale the pattern.
Preserve access to tacit knowledge. Keep decision-making authority with people who have contextual insight where patterns don’t generalize. When success depends on context-specific factors, ensure those with direct experience make decisions. Don’t rely only on distant analysis.
The mistake most scaling companies make is trying to convert all tacit knowledge to explicit knowledge. They create comprehensive documentation, detailed processes, and centralized decision-making based on data systems. These are fine and can be helpful. But just because these approaches can lead to success doesn’t mean they’re always appropriate.
The solution isn’t to avoid documentation and systematization. It’s about first recognizing which knowledge can be effectively centralized (high-signal, generalizable patterns). And second, recognizing which needs to remain decentralized (low-signal, context-specific patterns).
Why Optimizing Based on Historical Data Can Make Performance Worse
Economist Robert Lucas won the Nobel Prize in 1995. He won, in part, for identifying a problem that explains why data-driven strategies may fail at scale. In simple terms, the “Lucas Critique” states that statistical relationships change when you act on them. (This may be an oversimplification, but it is helpful in this context.)
The Lucas Critique in Business Contexts
Example 1. Let’s say your customer acquisition funnel shows that email campaigns convert at 5%. You build a model predicting that doubling email volume will double conversions. But when you put it in place, conversion rates drop to 2%. Customer behavior changed in response to higher email volume. People who used to read your emails now mark them as spam or unsubscribe.
The historical relationship (5% conversion) was true when emails were occasional. It broke down when your optimization changed the conditions.
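Here is a toy version of that failure mode. The numbers are made up for illustration: the 5% rate, the volumes, and the fatigue curve are all assumptions. The naive forecast multiplies the historical rate by the new volume, while the “actual” behavior lets the rate fall as volume rises.

```python
def actual_conversions(emails_per_month):
    """Hypothetical customer behavior: higher volume fatigues recipients."""
    base_rate = 0.05            # rate observed at the historical volume
    historical_volume = 100_000
    fatigue = 1.32              # assumed sensitivity of the rate to volume
    rate = base_rate * (historical_volume / emails_per_month) ** fatigue
    return emails_per_month * rate

doubled = 200_000
naive_forecast = 0.05 * doubled          # assumes the 5% rate is a constant
actual = actual_conversions(doubled)     # the rate falls to about 2%

print(f"naive forecast at 2x volume: {naive_forecast:,.0f} conversions")
print(f"'actual' at 2x volume:       {actual:,.0f} conversions")
```

The naive model predicts 10,000 conversions; under the assumed fatigue curve you get about 4,000, fewer than the original volume produced. The historical relationship was a property of the old equilibrium, not a law.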
Example 2. Now, let’s say you have a pricing analysis that uses historical price sensitivity data. It shows that a 10% price increase would improve profitability with minimal market share loss. But when you implement it, you lose market share faster than expected. Your competitors didn’t follow your lead. They held prices steady and gained your price-sensitive customers.
The historical relationship (low price sensitivity) was true when competitors’ prices moved together. It broke down when your action changed competitive dynamics.
Your historical data reflects an equilibrium where certain factors were constant. The statistical relationships you observe depend on that equilibrium persisting. When you change the system by acting on the data, you can change the equilibrium. This can then make all the historical relationships invalid.
In scalable problems, the system is large enough that your actions don’t change the equilibrium. Optimizing your supply chain doesn’t change supplier behavior industry-wide. Improving your operational processes doesn’t trigger competitive responses. Those are internal to your organization.
In non-scalable problems, your actions change the system. Entering a new market changes competitive dynamics. Changing your pricing changes customer expectations. Implementing your growth strategy changes what your competitors do.
Analyses based on historical data assume the equilibrium stays constant. This works for scalable problems where you’re small relative to the system. But, in some cases your actions will change the system. This then requires you to rely on adaptation rather than optimized plans.
Scale-ups are vulnerable to this. As companies grow from small to significant players, they cross a critical threshold. They become large enough that their actions change market balance. The Lucas Critique starts applying to decisions where it didn’t before.
When you’re a small startup entering a market, competitors don’t change their strategies in response to you. You can test approaches. You can optimize based on results. Your actions don’t trigger responses that invalidate your learnings.
When you’re an established scale-up entering a market, competitors notice. They respond. Your presence changes the competitive landscape. Historical patterns stop predicting future results. Your actions shift the equilibrium.
This is why sophisticated data-driven approaches that worked great during early growth can fail at scale. It’s not that the analysis or your analysts got worse. It’s that your company became important enough that the Lucas Critique applies. Your tweaks now change the system. You end up optimizing against a moving target rather than a stable one.
Diagnosing Scalable vs. Non-Scalable Problems in Your Organization
Signs you’re dealing with scalable problems:
Historical data predicts future performance with reasonable accuracy
Optimization efforts produce measurable improvements
Best practices from other organizations apply with minimal adaptation
Signs you’re dealing with non-scalable problems:
Each situation has unique characteristics that matter for success
Historical patterns don’t predict current performance
Local knowledge creates advantages that central analysis misses
Signs you’re applying the wrong approach:
Sophisticated analysis produces recommendations nobody believes will work (treating noise as signal)
Local innovations fail when scaled to other situations (trying to scale what’s non-scalable)
Experiments applied to problems with analytical solutions (defaulting to judgment when calculation works)
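For teams that want to operationalize this diagnosis, here is a hypothetical sketch of the checklists above as a rough scoring function. The questions, scoring, and tie-breaking rule are illustrative, not a validated instrument.

```python
def diagnose(scalable_signs, non_scalable_signs):
    """Tally yes/no answers to the two checklists and suggest an approach."""
    signal_score = sum(scalable_signs)
    noise_score = sum(non_scalable_signs)
    if signal_score > noise_score:
        return "lean on calculation: optimize against the patterns you have"
    if noise_score > signal_score:
        return "lean on judgment: experiment to generate new signal"
    return "mixed evidence: run cheap experiments before committing to a model"

# Example: pricing in a mature market vs a first international market entry.
print("mature pricing ->", diagnose([True, True, True], [False, False, False]))
print("first entry    ->", diagnose([False, False, False], [True, True, True]))
```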
Building Dual-Method Organizations
There are a few specific approaches that help build organizations that excel at both scalable and non-scalable problems. (We discussed these in Growth Isn’t One Sided and here.)
Use separate but connected systems for different problem types.
Scalable problems benefit from standardized analytical processes, centralized expertise, and systematic optimization approaches.
Non-scalable problems benefit from decentralized decision-making, local knowledge access, and experimental learning processes.
The key is creating connections between these two systems. These connections let insights flow between both without forcing them to use the same approaches.
Operators and refiners need analytical systems that maximize efficiency.
Creators need experimental systems that maximize learning.
Harmonizers ensure both systems inform each other without undermining each other.
Use portfolio thinking to balance resources.
You need to think about scalable and non-scalable approaches as a combined portfolio, not as individual bets.
Take into account your industry characteristics, competitive environment, and organizational stage.
Match your resource allocation to your signal-to-noise environment.
Create information systems that distinguish between explicit knowledge and tacit knowledge.
Use dashboards, documentation, training programs, best practice libraries for explicit knowledge. This enables scaling what can be scaled.
Preserve access to tacit knowledge through networks, local decision-making, and direct customer access. This enables adaptation where scaling doesn’t work.
Don’t try to convert all tacit knowledge into explicit knowledge. Recognize that some knowledge loses its value when codified.
Use measurement systems that account for the different logic of scalable versus non-scalable work.
Measure scalable work on efficiency gains and improvement. Success is optimization. Find ways to do the same thing better, faster, cheaper.
Measure non-scalable work on learning, adaptation, and option creation. Success is discovery and knowledge-sharing. You want to identify what works in this specific context and then generate knowledge for future application.
Using scalable metrics for non-scalable work kills the experimentation you need. Using non-scalable metrics for scalable work prevents the optimization you should achieve.
The Harmonizer Advantage
The Harmonizer concept becomes essential when organizations need to coordinate between approaches.
Harmonizers excel at distinguishing between different problems. They know what needs optimization versus adaptation. They can recognize when analysis is appropriate and when judgment works better.
They help translate between explicit knowledge systems (scalable) and tacit knowledge networks (non-scalable). This enables organizations to capture the benefits of both without letting either dominate.
They build portfolio approaches to resource use that invest in both types of problems, based on the type of information available in each challenge. Organizations often default to one approach or the other, and more often than not it’s the optimization-heavy one, because it’s easier to justify. But just because you can justify a choice does not mean it’s correct. Harmonizers maintain the legitimacy of both approaches by demonstrating when each creates value.
Harmonizers know when to stop analyzing and start experimenting. They know that in high-noise situations, learning by doing provides better insight than learning by analysis. This prevents the “analysis paralysis” that kills adaptability in fast-changing markets. It also prevents over-experimenting: constant experiments can waste resources, and sometimes analysis is the best approach.
The Coordination Challenge
Understanding which problems are scalable versus non-scalable is foundational for effective organizational design. But most valuable opportunities require both approaches working together.
You need operators running scalable processes efficiently. Creators discovering non-scalable opportunities. Refiners optimizing the transition from discovery to scale.
These functions need to coordinate without undermining each other. Operators can’t force creators to adopt standardized processes for discovery work. Creators can’t prevent operators from optimizing proven approaches. Refiners need access to both creator insights (what’s possible) and operator data (what’s working).
But internal coordination faces hidden costs that grow as organizations scale. These costs don’t always appear explicitly on income statements, but they drain effectiveness. Next week, we’ll explore why this coordination is so difficult. At its core, the Harmonizer role isn’t just about “improving communication.” It’s about solving transaction cost problems that make internal cooperation more expensive than it needs to be.
The Bottom Line
Economics explains why the distinction between scalable versus non-scalable is so critical in business.
Scalable problems with reliable signal call for systematic optimization and analytical approaches. You can use calculation in these situations because you have enough samples to distinguish patterns from noise. More data helps. Historical patterns predict future performance. Optimization creates real value.
Non-scalable problems with high noise need adaptation and judgment-based approaches. You can’t use reliable calculation because you don’t have enough samples to distinguish signal from noise. More analysis of existing data doesn’t help in these situations. You need different information, not more of the same information. Experimentation generates the signal that analysis requires.
Understanding this distinction enables better resource allocation. You stop wasting analytical resources on uncertain problems. But, you also ensure predictable problems get the optimization attention they deserve.
The first step is being able to identify which problems are scalable versus non-scalable. Step two is applying the right problem-solving approach rather than defaulting to one method for all challenges.
This isn’t about a fight between data and intuition. It’s about using economics to match your decision-making method to your information environment. Use calculation where you have signal. Use judgment where you have noise. Then, build systems that preserve both and apply each where it creates the most value.

