
The dangers of product management

30-Apr-19

I’ve been thinking a lot about the role of product management in engineering organizations lately.

Three things are driving my thinking. First, I’ve read a number of articles about technology companies going off the rails because of product decisions. There are stories about Facebook’s product organization pushing ahead with projects despite internal security concerns. And stories about YouTube playing whack-a-mole with “bad content” because they let engagement drive the product instead of safety.

Next, I’ve followed all the engineering details of the Boeing 737 MAX accidents and subsequent groundings. This is a classic case of business needs vs. engineering needs. The push for bigger and more efficient engines on the 737 MAX to meet market demand led to some risky software engineering (i.e. taking humans out of the loop).

Finally, I’m leaving my product management job after 2+ years. I’m returning to engineering. With that transition I’m reflecting on the value product brings to an organization and the risks it hides.

I’ve come to the conclusion that product organizations are dangerous for engineering and for the world at large. That conclusion was solidified by this quote from Mary Poppendieck:

“Managers (often lacking coding experience or an engineering background) decided that it would be more efficient if one group of people focused on designing software systems while another group of people actually wrote the code. I have never understood how this could possibly work, and quite frankly, I have never seen it succeed in a seriously complex environment.”

I read the article about YouTube in last week’s NYT business section. That article focused on business decisions YouTube made that increased usage and engagement. Those decisions are driven by the need to support advertisers. But ultimately the business decisions that keep users watching and engaged create risks. To keep users engaged you need to recommend relevant videos. And the recommendation algorithms operate like rubber-necking at a traffic accident: people can’t help but watch the bad stuff. The risk is that you recommend videos that are ultimately damaging to the viewer, the advertiser, and the brand.

Why did this happen? It happened because of the disconnect between product and engineering. Engineering works hard to meet product requests. But engineers are often in the dark about the full scope of what’s going on. Not purposefully in the dark. Just in the dark because they are one step removed from the needs and the risks. And product people rarely think about the risks. They tend to focus on the opportunity while discounting the risks. So YouTube engineers may have seen some of the risks but not had the power to change paths, because that would diminish the business opportunity. Of course, it’s possible that no one saw the risks. Some companies are oblivious to the risks they create when those risks are external to the company. YouTube cares about user-generated videos and advertisers. Violence, pornography, what people do in the real world: all of that is external to YouTube.

A great example of this disconnect comes from one of the engineers who worked on the recommendation engine, quoted in a Bloomberg article about the challenges at YouTube.

‘Paul Covington, a senior Google engineer who coauthored the 2016 recommendation engine research, presented the findings at a conference the following March. He was asked how the engineers decide what outcome to aim for with their algorithms. “It’s kind of a product decision,” Covington said at the conference, referring to a separate YouTube division. “Product tells us that we want to increase this metric, and then we go and increase it. So it’s not really left up to us.”’
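Covington’s answer describes a pattern that is easy to sketch: the ranking objective is exactly the metric product asked to increase, and nothing else enters the objective. Here’s a minimal, hypothetical illustration of that pattern — it is not YouTube’s actual system, and every name and number in it is invented:

```python
# Hypothetical sketch of metric-driven ranking, not YouTube's actual system.
# Product says "increase this metric"; the ranker optimizes it and nothing else.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # the metric product asked to increase
    safety_score: float             # known to engineering, unused by the ranker

def rank_by_engagement(candidates):
    # The objective is exactly the requested metric -- safety never enters.
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)

videos = [
    Video("calm tutorial", 4.0, 0.9),
    Video("outrage bait", 11.0, 0.2),
]
top = rank_by_engagement(videos)[0]
# The rubber-necking video wins because the objective can't see safety_score.
```

The point of the sketch is that the risk isn’t in the code; the code faithfully does what it was asked. The risk is in which question the objective was allowed to answer.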

There’s a similar set of stories coming out of Facebook around tradeoffs between security and advertising-fueled growth. Facebook’s business side wants to help advertisers and build product features to support them. But when those features have security and privacy implications, those implications get lost between product needs and engineering needs.

“Many of the changes that are being put in place to clean up the Facebook platform will be expensive and could have an impact on growth, putting a brake on the ad-revenue machine that Ms. Sandberg built. In July, when Facebook reported that a surprise slowdown in revenue growth for the second quarter was likely to continue along with an unexpected increase in costs for security and privacy, investors shaved almost $120 billion in value from the company’s valuation—the biggest one-day loss ever for a U.S.-listed company.”

Next, the Boeing 737 MAX. This was a classic case of bypassing engineering limits via piecemeal changes. But it’s also a case of business decisions driving software functionality that’s disconnected from the overall system.

The business drivers were increasing competition and a growing reliance on software solutions when going back to the drawing board isn’t an option. The competition wasn’t direct; no one sells a direct 737 competitor. The competitive pressure was more Darwinian: airlines wanted to reduce fuel usage, avoid retraining pilots, and standardize their fleets as much as possible. This led to putting bigger engines on the 737 MAX to increase fuel efficiency (bigger, higher-bypass engines move more air and are thus more efficient).

But the bigger engines changed the flight profile of the plane. They had to be mounted a bit forward on the wings, and in that forward location the engines caused the plane to tip up when thrust was applied. It’s kinda like having engines pointed a bit toward the ground: when they fire, they push the nose of the plane upward. Plane designers are always worried about planes pointing their noses too high. It leads to a stall, because a plane needs forward motion (not upward) to generate lift from the wings.

So, rather than change the plane’s profile, Boeing put in place software that would force the nose down when it sensed too high a nose-up pitch (angle of attack). This took humans out of the loop, and it caused the planes to crash when the software got faulty readings of the angle of attack.
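The failure mode described above can be boiled down to a deliberately simplified sketch. This is illustrative logic only, not Boeing’s actual software; the threshold and sensor values are invented. A control function that trusts a single angle-of-attack sensor has no way to tell a real stall from a stuck vane:

```python
# Deliberately simplified sketch of a single-sensor control loop --
# not Boeing's actual implementation. One sensor, no cross-check.
AOA_LIMIT_DEG = 12.0  # illustrative threshold, not a real parameter

def trim_command(aoa_sensor_reading_deg):
    """Command nose-down trim whenever the (single) sensor reports a high angle of attack."""
    if aoa_sensor_reading_deg > AOA_LIMIT_DEG:
        return "nose_down"  # applied repeatedly, regardless of the actual attitude
    return "none"

# A stuck sensor feeding a constant bad reading keeps commanding nose-down,
# even if the plane is actually flying level:
faulty_readings = [74.5] * 5  # invented stuck-vane value
commands = [trim_command(r) for r in faulty_readings]
```

A more defensive design would cross-check two independent sensors and disengage when they disagree, keeping the humans in the loop. The danger isn’t the trim logic itself; it’s trusting one input with no escape hatch.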

Finally, my own journey. I switched from engineering to a product team after an acquisition. I’ve always felt like an outsider in engineering organizations. I’m quick to try things. I have a very experimental (and experiential) mindset. This is not to say I fly by the seat of my pants. I’m a member of the IEEE Computer Society and the ACM. I consider myself a professional who follows the profession’s codes of ethics. One of my favorite books is Normal Accidents by Charles Perrow. I’m always on the lookout for the risks in software engineering situations.

But my experimental mindset was often at odds with more cerebral-minded engineers. I would rather see the results of the experiment than talk about them. I’m a skeptic, so much so that I don’t even trust my own thought experiments. That “move fast, learn, adapt” mentality meant I didn’t always find camaraderie in an engineering-only team. Plus, I quickly understand business needs. (This is not to say that engineers don’t understand business needs; the best ones work hard to understand them. But it rarely comes first for an engineer.) So it was a natural progression for me to work in a product organization. Product organizations work to translate business needs into software solutions. While product people don’t actually build the software, they do help shape it and keep teams focused on delivering a solution.

But I learned that (some) product people are often just glorified project managers. They are responsible for timeline commitments and are thus willing to trim things to meet a deadline. Or at least that pressure is always there in a product group. It comes out in the form of the question, “What scope can we cut?” The problem is that cutting scope always has consequences. In complex systems those consequences don’t emerge for quite some time, and they can be disastrous. Literally plane-crashing disastrous.

Product people are also responsible for asking business value questions. Questions like, “Who will pay for this?”, “What’s the total market size?”, “Is that feature really valuable?” These are important questions in the context of running a business. But when you task one group with all the business decisions and another with all the engineering decisions, you create a false dichotomy. And, as is common in large organizations, splitting these functions between departments creates winners and losers. The reality is that you need to balance business needs and engineering needs. Balance is fundamentally different from picking winners and losers. It means that you sometimes invest in expensive engineering because it increases safety and resiliency. It means you sometimes move quickly (not sloppily) to understand a market without engineering the best solution.

I’ll be forever suspicious of organizations that separate product and engineering into different reporting hierarchies. There are too many risks in that organizational design. It’s better to have one team responsible. A single team is less likely to take risks that other teams would have to pay for. A single team will care for the software better than any collection of teams with opposing goals.