Jackson Bates

Who Owns When?

There are three people in the room, and they all think they own the timeline.

The Product Manager owns it because they know what needs to get to market. The Staff engineer owns it because they can see the dependencies and the technical risk. The Engineering Manager owns it because they know what the team can actually absorb.

All three are right. That's the problem.

In theory, the division of responsibility is clean enough. The PM owns what and why. The Staff engineer owns how. The EM owns the people - their workload, their wellbeing, their capacity to actually deliver. "When" falls out of that naturally, doesn't it?

In practice, "when" is where all three sets of concerns collide at once, and nobody has a monopoly on the answer.

Priority has business data behind it - market windows, sales cycles, the school year calendar that your customers plan their entire year around. Dependencies are technical facts. This thing genuinely cannot ship until that other thing exists, and the Staff engineer is usually the only person in the room who knows where those hard constraints actually are. But capacity? Capacity has a human element that the other two inputs don't. It can be pushed. It can be coaxed. It can, if the EM isn't careful, be quietly absorbed by the EM themselves.

That last one matters more than it might seem. I'll come back to it.

When the deadline becomes real

A few years ago, my team was rebuilding our content authoring tool from scratch. The old one had done its job - it got the product to market - but it was naive and clunky in ways that had become genuinely limiting. So we set about replacing it properly.

At the same time, we were rebuilding the quiz display system - the infrastructure that renders what gets authored, in the student-facing and teacher-facing product. Two parallel rebuilds that had to converge. The authoring side was ahead; our content team had already been given access to a new question type the old tool couldn't support, and they'd started using it. The hat was already over the wall.

The display layer hadn't caught up yet.

What we hadn't surfaced early enough in the planning conversation was the hard external deadline attached to all of this. Australian schools were cutting over to a new curriculum at the start of the school year. Not a soft target, not a stakeholder preference - an actual date the entire country's school system was moving to, whether we were ready or not. Our content team needed to have authored everything in the new format, and students needed to be able to see it, by then.

So there we were. Two systems that had to meet in the middle. Content already being created in a format the display layer couldn't yet render. A deadline defined by the national curriculum calendar, not by us. The capacity constraints were what they were. We couldn't add engineers. We couldn't move the date.

What we could change was our approach. We shipped without tests. We wrote them afterwards. It's not a pattern I'd advocate for or want to normalise - but the value was real, the deadline was real, and we got both systems across the line in time. The content team met their obligations. The students saw the new curriculum on day one of the school year.

The reframe wasn't "what can we cut?" It was "what can we do differently to get there?"

That distinction doesn't usually get made clearly enough.

When the scope is the problem

The second story is more instructive.

We had an assessments feature in development - a proper one, not a stub. The proposal was ambitious: new feedback presentation, reworked results screens, retry settings, timer functionality, significant content system changes underneath. It had been designed with care by a PM and designer who understood the problem space well.

As the year progressed, it became obvious we weren't going to ship it. The UX still had too many open questions. The technical complexity was real. The content rework was substantial. There was no version of this feature that was going to make it out the door in time to matter.

But "in time to matter" was the key phrase. We had a window approaching - the exam revision period, roughly two months before end-of-school exams, when self-directed students lean into the product hard. Highest traffic of the year for that cohort. A genuine opportunity to put something in their hands at exactly the moment they needed it.

We all got in a room. The question wasn't "can we hit the date?" It was "what's the thinnest thing we can ship that delivers real value by then?"

We went through the whole feature list. And the thing that unlocked it was almost the last item we looked at - the timer. We already had revision sessions. Students could already practise. What they couldn't do was simulate exam conditions with a countdown running. That was it. That was the thing.

We shipped a timer. Not the assessments feature. Not the reworked results screens or the retry logic or any of the rest of it. A timer that let students feel what it would feel like to work under pressure before the day actually arrived.

The engagement numbers in that revision window were the best we'd seen.

The full assessments feature still exists, by the way. It's coming. But the value we captured that year came from being willing to ask the smaller question - and being honest enough about capacity and complexity to admit that the bigger question wasn't answerable yet.

The thing I do instead of solving the problem

Here's the part I said I'd come back to.

My team carries ongoing obligations that don't stop just because a sprint is full. Vulnerability remediation. Package updates. Pipeline maintenance. The kind of work that isn't glamorous, doesn't ship features, but has to happen - and has SLAs attached.

When the team is heads-down on something important, I tend to absorb that work myself. I pick up the package updates. I keep on top of the vulnerability queue. I do it so my team doesn't have to context-switch, so the feature work stays uninterrupted, so the deadline stays in reach. And here's what I've slowly come to understand about that: it's a workaround, not a solution.

A healthier system would have this work accounted for in the planning conversation - not quietly handled by the EM while everyone else looks the other way. When I absorb it, I'm doing something useful in the short term and something slightly dishonest in the long term. I'm making the capacity look more elastic than it is. I'm letting the system find its slack in me instead of naming the constraint clearly.

It eats into my people responsibilities. Some weeks it eats into them a lot.

I don't think I'm unique in this. I'd wager most EMs have a version of this pattern - some category of work they've taken on personally to keep the team's head above water, that's never quite made it into the conversation about what the team can actually absorb.

So who owns when?

Nobody owns it cleanly. Priority, dependencies, and capacity are three legitimate inputs with three legitimate owners, and "when" is what you get when you try to synthesise them under pressure. But capacity is the only input with genuine agency in it. It can be defended, compressed, or quietly absorbed. The PM can't move the school year. The Staff engineer can't dissolve a real dependency. The EM, though - the EM can always find a little more give, somewhere.

That's the power and the risk of the role, sitting right next to each other.

The best thing I've found is to name the constraint before the pressure arrives. To say "here's what the team can actually absorb" clearly enough that it becomes a real input to the conversation, not just the thing that gets negotiated away at the end.

It doesn't always work. But it's the right question to keep asking.