Why You Still Need a Human to Test Your Features

By Helen Worthington, Director and Head of Testing at More Than Monkeys

I’ve worked in testing long enough to know this: you can have a flawless build that still leaves your users lost, frustrated, or quietly giving up.

Automation is brilliant for checking whether your code behaves exactly as you told it to behave. But it’s terrible at telling you whether the thing you built actually makes sense to the people who will use it.

That second part, the “makes sense” test, is why you still need a human.

The Two Jobs of Testing

When teams talk about testing, they often mean “does the implementation match the spec?” Unit tests, integration tests, end-to-end tests - all great for verifying that a requirement has been met.
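
To make the split concrete, here’s a minimal sketch of that first job - spec verification - assuming a Jest-style TypeScript test runner. The startOnboarding function and its return shape are hypothetical stand-ins, not code from any real project; the point is that a test like this can prove the flow reaches the right state while saying nothing about whether the flow makes sense.

    // A minimal sketch of spec verification. Assumes a Jest-style
    // runner; startOnboarding and OnboardingState are hypothetical
    // stand-ins, not a real API.

    interface OnboardingState {
      step: string;
      childProfileCreated: boolean;
    }

    // Hypothetical implementation under test: validates the parent's
    // email, creates a child profile, and advances the flow.
    function startOnboarding(parentEmail: string): OnboardingState {
      if (!parentEmail.includes("@")) {
        throw new Error("invalid email");
      }
      return { step: "select-skills", childProfileCreated: true };
    }

    describe("onboarding", () => {
      it("creates a child profile and advances to skill selection", () => {
        const state = startOnboarding("parent@example.com");
        expect(state.childProfileCreated).toBe(true);
        expect(state.step).toBe("select-skills");
      });

      it("rejects an invalid email", () => {
        expect(() => startOnboarding("not-an-email")).toThrow("invalid email");
      });
    });

Both tests can pass on a flow that no first-time user understands.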

But there’s another job: does the feature actually work for a human, in the real world, without a developer standing over their shoulder explaining it?

That’s not something you can leave until the end. You need to bring that user mindset into the build early and often.

Using the Product Like a User

I’m not talking about clicking a few buttons and calling it done. I mean sitting down and pretending you are that parent, that nurse, that tradesperson - whoever your product is for - and using it the way they would.

That means:

  • Forgetting what you know about how it was built.
  • Starting from a clean slate, as if you’ve just downloaded it or logged in for the first time.
  • Asking yourself: If I knew nothing about this, would it still be obvious? Would I trust it? Would I keep using it?

Do it as soon as there’s something testable. Even a rough prototype. Because the earlier you spot the “this doesn’t make sense” moments, the easier (and cheaper) they are to fix.

The TopTekkers Lesson

One project that stuck with me was TopTekkers - an app for children and their parents to track and improve football skills.

The implementation was flawless. All the features worked exactly as designed. Every automated test passed. But when we put it in front of parents and kids, it became clear we’d made some assumptions that didn’t hold up in the real world.

Parents didn’t understand the onboarding process without an explanation. Kids struggled to find the training videos they were looking for. Nobody spotted these issues in the spec because, on paper, it all made sense. It was only by actually using the app in the context it was designed for that we realised we’d built something that was technically perfect but practically confusing.

We fixed it, but it was a painful reminder that real usability only reveals itself when you step into the shoes of the people who will use it.

Do It Early. Do It Often.

Manual testing with a user mindset isn’t something you save for the final “polish” phase. It’s a constant loop: build a little, test like a user, learn, adjust, repeat.

This approach catches more than just bugs. It catches:

  • The button that works but is in the wrong place.
  • The workflow that technically follows the requirements but is too clunky to use daily.
  • The default settings that make sense to the dev team but are meaningless to the end user.

The more often you do this, the more natural it becomes - and the less chance you have of shipping something that works perfectly but still fails in the wild.

The Cost of Skipping It

Skip this step and you’ll find out the hard way - through support tickets, churn, and low adoption. By then, fixing the problem is far more expensive than catching it before it leaves the building.

Make It Part of the Process

Don’t leave “real human testing” as an optional extra. Bake it into your definition of done. Involve testers and non-testers alike. Rotate the role so everyone occasionally gets to experience the product fresh.

Automation will keep your code healthy. Humans will keep your product honest. You need both.

Before you hit deploy, make sure someone - a real person, thinking like a real user - has taken it for a spin. It could be the difference between a feature that lives and a feature that dies.

Helen Worthington is Director and Head of Testing at More Than Monkeys, where she helps teams deliver software that not only works flawlessly but also makes sense to the people who use it. She writes about practical testing strategies, user-centred quality, and building products that succeed in the real world.

Through More Than Monkeys, Helen works with startups and scaleups to embed testing into every stage of delivery - from early prototypes to production-ready releases - ensuring products are both technically sound and genuinely usable.

Visit morethanmonkeys.co.uk to learn how we can help you ship features your users will love.