A couple of posts have hit my LinkedIn feed about the dreaded feature factory, a term almost always meant as derogatory. It has become a catch-all for product creation teams (narrowly defined as product, design, and engineering) that value output over outcomes or lack rigor in developing customer insights. In recent years it has also become a blunt object wielded by frustrated leaders who see a better path but can’t seem to implement it. I freely admit to past bouts of blaming feature factory mentality for processes and organizational models that simply needed improvement. There are, of course, organizations that have entirely failed to prioritize learning and managed to succeed anyway. I’m often contacted by leaders in these organizations to diagnose the issue.
I start with these key questions:
Does the team know why we’re building what we’re building?
Are we able to measure the impact of what we’re building?
Are we actually measuring/getting feedback on what we’re building?
Do we know what our measurements/insights mean?
Do we change our future builds based on the insights we gather?
If the answer to all of these questions is “yes,” you’re probably OK. What’s striking is that many orgs accused of feature factory mentality are able to answer in the affirmative. There are times when an organization that seems to be a feature factory simply isn’t. Both posts I came across discuss how this can be the case.
Strategic Factories
In his post John Cutler describes the “just ship” scenario thusly:
Some businesses do just fine as feature factories (for some time, at least). This is especially common in situations where the solution space is fairly known, and there is a premium placed on "catching up" to legacy competitors.
He then provides the tradeoffs you face when deciding to “just ship”:
When do you invest in more outcome-centricity, experimentation, measurement, and innovation (for lack of a better word)? And when do you go into feature factory mode and "just execute"?
Meanwhile, all those features are piling up, and your team has to start maintaining the feature soup you've expertly cooked. You're at risk of resembling the crusty incumbents you intended to disrupt.
(Quick aside: if you’re not reading John’s publication The Beautiful Mess, you should be.)
I’m attracted to how easily this take can be flowcharted for leadership teams: consider the downstream impacts of the decision to execute on new features, and weigh that against using the same bandwidth to learn more about the space. Pretty simple strategy stuff.
John’s take assumes a level of organizational maturity: we have signal, customers, and we’re making a run at competitors. We understand the space because we’ve been in it for some time. This isn’t about building differentiators, it’s about building beyond them. But if you happen to be a new startup without much traction, what good is measurement?
Early Factories
Matthew Zammet addresses the justification for feature factory-ish teams in early startups in his post:
You need to build a lot to build the basis of the product or to get to par with the other players in the market. Then seeing what really works. And there’s always something going wrong which needs to be fixed.
He continues with reasoning for de-emphasizing measurement:
You have very few clients (if any!) in the beginning. You can’t really measure most of the things and there’s no significant base of [ideal customers]. As you get more ideal customers and the volume increases then you can start measuring with data which has statistical significance.
This follows my experience in the early startup world. It’s easy to get stuck looking at meaningless data or obsessing over vanity metrics to justify a specific direction. In the early days it’s the responsibility of product leaders to keep the team focused on reaching product market fit at all costs. Customer interviews and qualitative feedback can be helpful, but insights are only valuable if you know who your ideal customer is. Following red herrings is an expensive risk when we create something new. Sometimes trusting your gut and “just shipping” is the only way to make the requisite progress to engage with the customers we ultimately need to attract.
Not Factories
I do think that Matthew is making a bit of a straw man argument here, though: feature factory mentality implies that we are focusing on output over outcomes. In the case of an early startup, outcomes simply take longer to measure. Teams that factor their constraints into product prioritization are attempting to shorten the time-to-learning. That’s best practice and hardly a feature factory.
Founders often get lost chasing their tails trying to implement a lean startup build-measure-learn loop without realizing there is a minimum amount of measurable insight required to inform the next build cycle. Similarly, there’s a minimum amount of product below which we can’t expect customers to interact in a way that yields insights (minimum…viable…ok I’ll stop). Early startups move fast, but without meeting these thresholds they cannot expect to make data-driven decisions. It makes sense to build towards MVP without assuming the success criteria are anything beyond “it works as intended.”
But that doesn’t mean you can skip the investment in data and research at an early stage. That’s exactly how feature factories are formed: early startups build and build and build, find product market fit, keep growing, and have no idea why. The accumulated lack of insight and inability to see historical trends in data forces Product to make uninformed decisions. The investment in analytics and user research is seen as too expensive because “we’re growing without it,” and the feature factory keeps churning out new stuff, creating exponentially more complexity and exacerbating the problem.
Conclusion
They say that undergrad is where you learn the rules and graduate school is where you learn nothing actually follows those rules. In the case of feature factories, it’s easy to paint with a broad brush and say that any organization that prioritizes shipping over tight iterations fits the definition. In reality it’s much more about organizational awareness: if we understand the tradeoffs and focus on shortening our time to learning, shipping lots of stuff doesn’t present a problem. It’s when we deprioritize learning as a whole that we run into trouble.
We can avoid this by defining what we hope to learn at the macro scale and referring back to it as we prioritize. If we’re building features to “catch up” with competitors, how will we know when we’re there? If we’re building an MVP of a new product, what does that product look like? Everything we build should be considered within these constraints. We can add a “does this contribute to our eventual learnings?” column in our prioritization exercises. We can hold regular retrospectives on whether our thesis or activities should shift based on new information. And critically, we can invest in a future where learning comes more quickly. Stand up analytics and teach the value of speaking with customers. Learn when and how insights are more valuable than new features. And look for the moment when we should make the switch.
Sam Gimbel, formerly VP of Product at Clover and co-founder of Clark (acquired 2019), is a consultant for product organizations of all sizes. He enjoys asking “why?” as frequently as possible.