
High-level Test Planning – The Content
June 17, 2025
Today I want to talk about test planning, and specifically about high-level planning. This is a continuation of this blog post, where we started talking about test planning in general. I suggest reading it first.
Anyway, long story short – we want a test plan, but we don't want to waste a whole bunch of time on it. Something useful, that everyone agrees on, to help us start working, without running around all over the place and not completing anything. And when things change, we'd rather not lose our way.
We can create such plans at different detail levels and resolutions. Depending on how long we're planning for, we can really go deep with the details. We can create a plan for a quarter, or for a 2-week sprint, and each can have its own detail level.
High-level test planning is more a declaration of guidelines, and it can work at any resolution. The cool thing is that for short releases, like a sprint, it's 5 minutes of work or less.
Regardless, the planning session ends with everyone agreeing on the topics.
The main topic is the content of the work – what we're going to work on. We can go into details here: features, workflows, capabilities, and so on.
Test Planning – What Goes In There?
But you’re probably thinking…

We're adding 5 new features in this release, so, naturally, we want all of them tested. Oh, and the rest of the features we've developed until now should continue to work.
Easy. End of discussion, right?
That’s a nice wish list. But if you want a good plan, you’ll need to work a bit harder. Because time will run out eventually, and we’ll always have more things left to test.
Instead of hoping we can do everything, we need to define the most important things to do first.
Seems obvious, I know. But good test planning goes deep.
Let's say you think you have enough time to test those 5 features. But you may not have a good automated regression suite. That means you'll need to do a manual regression session. To fit into the time you think you'll have, you'll need to choose what to focus on, from both the new and the old features.
Even if you do have an automated suite, you still don't have truly full coverage. Also, you may be able to cover the new features manually, but not be able to cover all of them with automated scripts.
You'll always need to choose and prioritize, because at some point time will run out. And there are going to be bugs and repeated tests. We just need to accept that we won't get to everything.
Once we accept this, we can prioritize. Prioritization is critical, because if we can't complete everything, it's better to complete the most important tasks.
Let’s talk about prioritization
"Everything is high priority" is never an acceptable answer. It doesn't even sound serious, as if people have never been in this cycle before.
Some workflows are more important than others. It’s the product manager’s job to prioritize, and their feedback is crucial here too.
If it's important, we'll do it before other things. Everyone needs to know what to do first, before moving on to the next, less important, items. That includes breaking down and prioritizing different scenarios of the same feature.
Suppose you're adding a new document type to the application. That's the new feature. But these documents come linked with different scenarios that we'd probably want to test.
If we don't have time, we'd rather test the creation of the new docs before moving on to testing deleting them. Whole big features are hard to truly cover. When you break them down, you'll always discover different scenarios, and some of them are more important than others.
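To make this concrete, here's a minimal sketch of what a broken-down, prioritized scenario list could look like. The feature and scenario names are hypothetical, loosely based on the new-document-type example above; this isn't a real tool, just an illustration of "work in priority order until time runs out".

```python
# Hypothetical sketch: scenarios from the breakdown, each with a priority.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    priority: int  # 1 = most important, do it first

# The breakdown of the "new document type" feature (illustrative names).
scenarios = [
    Scenario("Delete a document (new type)", 3),
    Scenario("Edit a document (new type)", 2),
    Scenario("Create a document (new type)", 1),
]

# Work top-down by priority; if time runs out mid-list,
# the most important scenarios were still covered first.
for s in sorted(scenarios, key=lambda s: s.priority):
    print(f"P{s.priority}: {s.name}")
```

The point isn't the code itself – it's that the list is explicit and ordered, so everyone in the meeting can see and agree on what gets tested first.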
And you need to do this enough to close the subject and reach an agreement. Remember – everyone in the meeting should agree on the scenarios and priorities. When there's an agreement, we have the beginning of a plan.
But we’re not there yet.

I'd like to introduce "the musts". We agreed on what to tackle, but inside the list there are scenarios that must work. Others can work ok, or maybe not at all.
“The musts” tell us where the quality bar is at.
We need to specify which scenarios must work, and, of what worked before, what mustn't break.
And if that’s not enough, we need to also define what “works” means.
For example, creating a document is important, so in the last version we tested it thoroughly. But in our new version, there's a new API version, with a couple of new fields. We're still supporting the old API, with fewer fields.
The "must" here is that adding a document must continue working with the old version of the API, and the new one must work too. Both must store information in the new schema. Since we've already decided that deleting new docs is of lower priority, there's no "must" to discuss there.
The "must" acts as a pass/fail criterion for the release. The product cannot be released without meeting the "musts". If we can release without meeting a criterion, it's not a "must".
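One way to think about this is as a simple release gate: collect a pass/fail result per "must" scenario, and release only if every one of them passed. The scenario names below are illustrative, taken from the document-creation example; this is a sketch of the idea, not a real release tool.

```python
# Hypothetical "musts" and their results (illustrative names).
must_results = {
    "create document via old API": True,
    "create document via new API": True,
    "both APIs store data in new schema": True,
}

def can_release(results: dict) -> bool:
    """The product can be released only if every 'must' passed."""
    return all(results.values())

failed = [name for name, ok in must_results.items() if not ok]
if can_release(must_results):
    print("Release OK")
else:
    print(f"Blocked by: {failed}")
```

Note there's no partial credit here – a single failing "must" blocks the release, which is exactly what makes it a "must" and not just another high-priority item.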
It may be tempting to skip this step, but shining a light on what must work focuses our attention as we're testing, and makes sure the agreed quality bar is met.
It can also be tempting to say that everything we deem important is a "must". But that misses the point. If the "musts" are an agreed-upon quality bar, and we lower that bar later because time ran out, then the "musts" don't mean anything. We should specify fewer, real "musts", to make it easier to stick to them.
Thinking about and stating what is really important, in what order to work, and what the release quality bar is – it's a bit of work, true. But it creates the guidelines for the next release, regardless of size. And everyone agrees on, and works on, the important things first.
That’s not all folks, but it is for today. We’ll continue talking about high-level test planning in the next posts. Until then, check out the API Testing Workshop. We do some planning there too.