34:24 Lena: Miles, let's talk about the mistakes that trip up even well-intentioned product teams. What are the biggest pitfalls you see when it comes to product metrics and analytics?
34:35 Miles: Oh, where do I even start? One of the most common mistakes I see is what I call "dashboard addiction." Teams build these beautiful, comprehensive dashboards with dozens of metrics, and then they spend all their time looking at them instead of taking action based on what they see.
34:51 Lena: So it becomes a form of procrastination disguised as being data-driven?
1:53 Miles: Exactly. And related to that is metric proliferation—tracking everything just because you can. I've seen companies with literally hundreds of metrics being tracked, but nobody can tell you which ones actually matter for business decisions.
35:09 Lena: How do you prevent that kind of metric overload?
35:12 Miles: I always recommend the "so what" test. For every metric you're tracking, you should be able to answer: "If this metric changed significantly, what specific action would we take?" If you can't answer that question, you probably don't need to track that metric.
35:28 Lena: That's such a practical filter. What other pitfalls should teams watch out for?
35:30 Miles: Another big one is mistaking correlation for causation. Just because two metrics move together doesn't mean one causes the other. I've seen teams make major product decisions based on spurious correlations that led them completely astray.
35:43 Lena: Can you give us an example of what that might look like?
18:10 Miles: Sure. Let's say you notice that users who receive email notifications have higher retention rates. You might conclude that sending more emails will improve retention. But it could be that engaged users are more likely to opt into notifications, not that notifications make users more engaged. The causation runs in the opposite direction.
36:03 Lena: So how do you test for actual causation rather than just correlation?
36:07 Miles: Controlled experiments are really the only way to establish causation. In that email example, you'd randomly assign users to receive notifications or not, then measure the difference in retention between the groups. That isolates the causal effect of notifications.
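A minimal sketch of the kind of randomized test Miles describes for the email example, assuming a simple 50/50 split; the user IDs, retained set, and numbers are illustrative, not a real implementation:

```python
import hashlib

# A/B assignment and retention comparison for the email-notification example.
# The 50/50 split and the retained set below are illustrative.

def assign_group(user_id: str) -> str:
    """Deterministically bucket a user so they always land in the same variant."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "treatment" if bucket < 50 else "control"  # treatment = gets notifications

def retention_rate(group: list[str], retained: set[str]) -> float:
    """Share of a group's users who came back during the retention window."""
    return sum(1 for u in group if u in retained) / len(group) if group else 0.0

users = [f"user_{i}" for i in range(10_000)]
treatment = [u for u in users if assign_group(u) == "treatment"]
control = [u for u in users if assign_group(u) == "control"]

# In practice `retained` comes from your analytics store; hard-coded shape here.
retained = {u for u in users if int(u.split("_")[1]) % 3 == 0}

lift = retention_rate(treatment, retained) - retention_rate(control, retained)
print(f"Retention lift, treatment vs. control: {lift:+.2%}")
```

Because assignment is random with respect to everything else about the user, a retention difference between the two groups can be attributed to the notifications rather than to pre-existing engagement.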
36:21 Lena: What about sample size issues? I imagine a lot of teams jump to conclusions based on small amounts of data.
36:27 Miles: Oh, this is huge. I see teams making decisions based on a few dozen users or a couple days of data. Statistical significance matters, and so does practical significance. Even if a change is statistically significant, it might not be big enough to matter for your business.
36:41 Lena: How do you help teams develop intuition about what constitutes sufficient data?
36:46 Miles: I always recommend learning the basics of statistical significance testing. You don't need to become a statistician, but understanding concepts like confidence intervals and statistical power will make you a much better data consumer. Most analytics tools now build these concepts into their interfaces.
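One common version of the significance check Miles mentions is a two-proportion z-test; a minimal standard-library sketch, with made-up counts:

```python
from math import erf, sqrt

# Two-proportion z-test: is the difference between two conversion rates
# bigger than chance alone would explain? Counts below are made up.

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for group B's rate vs. group A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p_value

z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
# Statistically significant is not the same as practically significant:
# a 0.6-point lift may still be too small to change a roadmap decision.
```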
37:01 Lena: What about the opposite problem—waiting too long to act because you want more data?
37:05 Miles: That's analysis paralysis again, and it's just as dangerous as acting on insufficient data. The key is matching your confidence requirements to the stakes of the decision. For reversible decisions with low downside, you can act on less data. For irreversible decisions with high stakes, you want more confidence.
8:13 Lena: That makes sense. What other common mistakes do you see?
37:24 Miles: Ignoring external factors is a big one. Teams will see a metric change and immediately assume it's due to something they did, when it might be seasonal effects, competitor actions, market changes, or even technical issues.
37:35 Lena: How do you account for those external factors in your analysis?
37:38 Miles: Always look at your metrics in context. Compare to the same period last year for seasonal effects. Check if competitors launched something new. Look at your technical monitoring to rule out performance issues. And consider broader market trends that might affect user behavior.
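A minimal sketch of the year-over-year comparison Miles mentions, assuming a daily metric stored as a plain dict keyed by date; the helper names and window sizes are illustrative:

```python
from datetime import date, timedelta

# Seasonality check: compare a week of a daily metric against the same week
# a year earlier before crediting (or blaming) a product change.

def window_mean(metric_by_day: dict[date, float], start: date, days: int) -> float:
    """Average the metric over `days` days starting at `start` (missing days count as 0)."""
    return sum(metric_by_day.get(start + timedelta(d), 0.0) for d in range(days)) / days

def yoy_change(metric_by_day: dict[date, float], start: date, days: int = 7) -> float:
    """Fractional change vs. the same window one year earlier."""
    now = window_mean(metric_by_day, start, days)
    then = window_mean(metric_by_day, start - timedelta(days=365), days)
    return (now - then) / then if then else float("nan")

# A +20% week looks less impressive if the same week last year was also up ~20%.
```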
37:51 Lena: What about gaming the metrics—when teams optimize for the metric instead of the underlying business outcome?
37:56 Miles: Oh, this is such a pervasive problem! I've seen teams boost their "engagement" metrics by making their product more addictive rather than more valuable. Or increase sign-up rates by making the process easier but sacrificing user quality.
38:08 Lena: So the metrics start driving behavior in unintended ways.
1:53 Miles: Exactly. This is why it's so important to have a balanced set of metrics that capture different aspects of success. If you only measure sign-ups, you'll optimize for sign-ups at the expense of everything else. You need to also measure activation, engagement, retention, and satisfaction.
38:26 Lena: What about the technical pitfalls? What mistakes do teams make in how they actually collect and process their data?
38:31 Miles: Data quality issues are huge. Inconsistent event tracking, missing data due to technical bugs, incorrect attribution—these can completely undermine your analysis. I always recommend building data validation into your analytics pipeline from the beginning.
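A minimal sketch of what that validation step might look like at ingestion time, assuming events arrive as plain dicts; the required fields and rules are illustrative:

```python
# Ingestion-time validation: reject or quarantine malformed events instead of
# letting them silently skew downstream metrics.

REQUIRED_FIELDS = {"event_name": str, "user_id": str, "timestamp": float}

def validate_event(event: dict) -> list[str]:
    """Return a list of problems; an empty list means the event passes."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            problems.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            problems.append(f"{field} should be {expected_type.__name__}")
    if not str(event.get("event_name", "")).strip():
        problems.append("event_name is empty")
    return problems

# Events that fail go to a dead-letter queue for review, not into the warehouse.
print(validate_event({"event_name": "signup", "user_id": 42}))
# -> ['user_id should be str', 'missing field: timestamp']
```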
38:45 Lena: How do you catch data quality issues before they affect your decisions?
38:48 Miles: Set up automated alerts for anomalies. If your daily active users suddenly drop by 50%, you want to know immediately whether that's a real user behavior change or a tracking bug. Also, regularly audit your data by comparing it to other sources of truth.
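A minimal sketch of the alert Miles describes, assuming you have daily active user counts on hand; the thresholds and numbers are illustrative:

```python
# Anomaly alert: compare today's DAU to a trailing 7-day average and notify
# someone on a big move, so tracking bugs get caught the same day.

def check_dau_anomaly(history: list[int], today: int,
                      max_drop: float = 0.5, max_spike: float = 2.0) -> str | None:
    """Return an alert message if today's DAU falls outside the expected band."""
    if not history:
        return None  # nothing to compare against yet
    recent = history[-7:]
    baseline = sum(recent) / len(recent)
    if today < baseline * max_drop:
        return f"ALERT: DAU {today} is below {max_drop:.0%} of the 7-day average ({baseline:.0f})"
    if today > baseline * max_spike:
        return f"ALERT: DAU {today} is above {max_spike:.0f}x the 7-day average ({baseline:.0f})"
    return None

print(check_dau_anomaly([12_400, 12_900, 13_100, 12_700, 13_300, 12_800, 13_000], 5_900))
```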
39:01 Lena: What about privacy and compliance issues? Those seem like they could create both technical and legal pitfalls.
1:08 Miles: Absolutely. Collecting data you don't need, failing to get proper consent, or not anonymizing sensitive information can create serious legal and ethical problems. The good news is that privacy-conscious analytics practices usually lead to better data quality anyway.
39:20 Lena: How so?
39:21 Miles: When you're thoughtful about what data you collect and why, you end up with cleaner, more focused datasets. You're less likely to have data quality issues because you're being intentional about every piece of information you capture.
39:30 Lena: What about organizational pitfalls? How do teams set themselves up for failure from a process perspective?
39:36 Miles: One of the biggest is not having clear ownership of metrics. When nobody's specifically responsible for data quality or metric definitions, things get inconsistent quickly. Different teams start calculating the same metrics differently, and suddenly nobody trusts the numbers.
39:48 Lena: So you need both technical infrastructure and organizational processes to support good analytics.
1:53 Miles: Exactly. And you need regular reviews of your metrics and definitions. As your product evolves, your measurement approach should evolve too. What made sense six months ago might not make sense today.
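One way to make that ownership and review cadence concrete is a shared registry of metric definitions that every dashboard reads from; a minimal sketch, where the metric names, owners, definitions, and review dates are all placeholders:

```python
# Single source of truth for metric definitions, so "activation" means the
# same thing on every team's dashboard. All values here are placeholders.

METRIC_DEFINITIONS = {
    "weekly_active_user": {
        "owner": "product-analytics",
        "definition": "distinct user_ids with >= 1 qualifying event in a rolling 7-day window",
        "qualifying_events": ["session_start", "core_action_completed"],
        "last_reviewed": "2025-01-01",
    },
    "activation": {
        "owner": "growth",
        "definition": "new user completes onboarding plus one core action within 7 days of signup",
        "last_reviewed": "2025-01-01",
    },
}
```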
40:02 Lena: This is really helpful for thinking about all the ways analytics initiatives can go wrong. Any final thoughts on avoiding these pitfalls?
40:09 Miles: Start simple, be consistent, and always connect your metrics back to real business outcomes. Most of these pitfalls come from losing sight of why you're measuring things in the first place. If you keep that purpose front and center, you'll avoid most of the common mistakes.