Virtual Assessment Centres 2.0

Scaling Collaboration, Fairness, and Efficiency with Qpercom

When COVID forced recruitment online almost overnight, the virtual assessment centre became a necessity rather than a choice. Organisations scrambled to replicate in-person processes through video calls, shared spreadsheets and hastily adapted scoring sheets. It worked – just about. But anyone who ran those early virtual assessment centres knew the cracks were showing.

Four years on, we’re entering a genuinely new era. Not virtual assessment centres as a compromise for when you can’t meet in person, but virtual assessment centres as a superior model in their own right. At Qpercom, we’ve been building toward this for a long time, and I want to share what that evolution actually looks like from the inside – both as a UX designer shaping the product and as a project manager working alongside the organisations that use it.


From Digital Workaround to Deliberate Design

The first generation of virtual assessment centres had a fundamental problem: they were analogue processes wearing digital clothes. Scoring still happened on paper or in Excel. Assessors still had to reconcile marks manually after the fact. Observers joined video calls without a clear structure for what they were supposed to be watching. The technology was present, but the process hadn’t been reimagined to take advantage of it.

What we mean by Virtual Assessment Centres 2.0 is something quite different. It’s a rethinking of the entire workflow – from how assessors are briefed, to how evidence is captured in real time, to how decisions are made and documented – with digital-first design at every stage.

At Qpercom, that means building a platform where structure and flexibility coexist. Assessors score candidates against predefined competencies the moment an exercise ends, without waiting for a wash-up meeting. Scheduling logic handles the complex rotation of candidates across exercises and assessors automatically. And everything – scores, notes, flags, evidence – flows into a single shared environment that the whole assessment team can see in real time.


The Collaboration Problem Nobody Talks About

One of the most underappreciated challenges in running a large-scale assessment centre is coordination. On a typical day you might have fifty candidates, ten assessors, five different stations and a handful of senior stakeholders observing. Getting everyone to the right place at the right time, briefed on the right candidate, scoring against the right criteria – that’s a significant operational undertaking before anyone says a word.

In a physical centre, you solve this with clipboards, colour-coded schedules pinned to walls and a coordinator with a very loud voice. Online, those mechanisms fall apart.

What we’ve focused on at Qpercom is making coordination invisible. The platform handles the scheduling matrix, sends assessors and candidates to the right assessment rooms, and surfaces the right scoring forms automatically. From a project manager’s perspective, this is transformative. The cognitive load of running an event shifts from logistics – “am I in the right place, do I have the right form?” – to the actual work of assessment.
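To make the scheduling piece concrete, here's a toy sketch in Python of the kind of rotation logic involved. This is purely illustrative – the function name and the simple round-robin approach are my own, and a real scheduling engine also has to handle assessor availability, breaks and uneven group sizes – but it shows the core idea: every candidate group visits every station exactly once, with no clashes.

```python
def build_rotation(candidates, stations):
    """Toy round-robin rotation: split candidates into one group per
    station, then shift each group to the next station every round,
    so every candidate visits every station exactly once."""
    n = len(stations)
    groups = [candidates[i::n] for i in range(n)]  # split into n groups
    schedule = []  # one dict per round: {station: group of candidates}
    for round_no in range(n):
        schedule.append({
            stations[(g + round_no) % n]: group
            for g, group in enumerate(groups)
        })
    return schedule

# Six candidates across three stations -> three rounds, no clashes.
rota = build_rotation([f"C{i}" for i in range(1, 7)],
                      ["In-tray", "Roleplay", "Interview"])
```

Working this out by hand for fifty candidates, ten assessors and five stations is exactly the two-day spreadsheet exercise the platform replaces.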

The collaboration piece goes deeper than scheduling. When assessors score independently in real time, you create a genuinely richer picture of each candidate. There’s no anchoring effect from the loudest voice in the room. Assessors commit to their ratings before any discussion takes place, which is one of the most powerful things you can do to reduce bias in group decision-making. The wash-up conversation then becomes what it should always have been: a structured, evidence-based discussion rather than a negotiation between strong personalities.


Fairness Isn’t a Feature, It’s a Design Principle

We feel strongly about this at Qpercom, and it’s something we debate a lot internally. Fairness in assessment isn’t something you bolt on at the end by running a diversity report. It has to be designed into the process itself.

That means a few things in practice.

Structured scoring matters enormously. When assessors are guided to score against specific behavioural indicators rather than general impressions, you reduce the surface area for unconscious bias. The platform design can either support this – by surfacing the right criteria at the right moment – or undermine it, by making it easy to skip to a summary rating without engaging with the evidence. We design for the former, deliberately and obsessively.

Then there’s data. One of the significant advantages of a well-built digital platform is that you can see patterns that would be invisible in a paper-based process. Which assessors are your hawks and which are your doves? Where are the inconsistencies between exercises? That visibility is the first step toward addressing them.
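The hawk/dove check is conceptually simple once the scores live in one place. Here's a minimal Python sketch of the idea – the function and data shape are illustrative, not Qpercom's actual data model: compare each assessor's average score against the cohort average, and a consistently negative deviation flags a hawk, a positive one a dove.

```python
from statistics import mean

def assessor_severity(scores):
    """scores: list of (assessor, score) pairs across candidates.
    Returns each assessor's mean deviation from the overall mean:
    negative = hawk (harsh), positive = dove (lenient)."""
    overall = mean(s for _, s in scores)
    by_assessor = {}
    for assessor, score in scores:
        by_assessor.setdefault(assessor, []).append(score)
    return {a: mean(vals) - overall for a, vals in by_assessor.items()}
```

In practice you'd want more than raw means – exercise difficulty and candidate mix vary – but even this crude view is invisible in a paper-based process.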

Accessibility is part of fairness too. When we were first introduced to WCAG principles, accessibility was something we retrofitted after the fact. Now our team builds these standards into the design of our platform from the outset – a significant step forward in meeting a wide range of accessibility needs.

I’ll be honest: this is still an area where there’s more work to do across the industry, including for us. But the infrastructure for fairer assessment exists now in a way it didn’t five years ago, and that’s genuinely exciting.


Efficiency at Scale and What It Actually Looks Like

Let me give you a concrete sense of what efficiency gains look like in practice, because the abstract case for “digital transformation” can obscure what actually changes for the people running these programmes.

Before adopting a well-structured virtual assessment centre platform, a large graduate assessment day – say, sixty candidates – might involve a programme manager spending two full days building the schedule, a further day briefing assessors and another day or two compiling results after the event. The wash-up meeting alone could run to hours as people cross-referenced their paper scoring sheets.

With Qpercom, the scheduling logic runs in minutes. Assessors are briefed through the platform, with exercise materials, scoring guides and candidate information all in one place. Results compile automatically as scores are entered. The wash-up becomes a focused discussion of borderline cases rather than a data reconciliation exercise.

That’s not a small thing. It means organisations can run more assessment centres, assess more candidates and make better decisions – without increasing the burden on their teams. For in-house talent acquisition teams who are often stretched, that efficiency isn’t a nice-to-have. It’s the difference between an assessment strategy that’s sustainable and one that quietly erodes under pressure.


What Assessors Actually Need (And What We Got Wrong)

I want to be candid about the design journey here, because there’s a tendency to present product development as a clean march toward the ideal solution. It isn’t.

We're a constant work in progress, and we continually seek feedback from all user roles – not just the admin teams on the client side. Our design decisions are grounded in user research, and we iterate on them constantly.

Our clients also come to us with requests for new functionality. A recent example is our Evidence Verification add-on: applicants upload digital evidence to be scored by assessors, either as a self-score that is then verified or as part of a multiple mini-interview discussed during a video session.

The lesson is an obvious one in UX, but worth restating: the best tool is the one people actually use, not the one with the most features. Simplicity in assessment software is incredibly hard to achieve because the underlying frameworks are genuinely complex. But it’s the right thing to fight for.


The Human Element

There’s a concern I hear fairly often, and it deserves a direct response: does virtualising the assessment centre reduce it to a purely transactional, dehumanised experience for candidates?

My honest view is that it depends entirely on how you design it.

A poorly run virtual assessment centre – where candidates spend the day toggling between anonymous waiting rooms, receiving no introduction to the organisation and no human contact beyond perfunctory instructions – can absolutely feel cold and dispiriting. That’s a design failure, not an inherent property of the medium.

A well-run virtual assessment centre, by contrast, can feel just as engaging as an in-person event. The key is intentionality about the candidate experience: a warm welcome session, clear communication about what to expect, exercises that are genuinely interesting and challenging, and human interaction built into the day rather than treated as an afterthought.

Some organisations are also finding that the virtual format is more accessible for certain candidate groups – those with disabilities, caring responsibilities or geographic constraints who might have struggled to attend an in-person centre. That’s not a trade-off. That’s an improvement.


Where We’re Heading

The honest answer is that the gap between what’s technically possible and what’s widely deployed in assessment is still quite large. There are things we’re actively working on at Qpercom – better integration between assessment outcomes and broader talent data, more sophisticated reporting for hiring managers, and continued improvements to how we support assessors throughout the scoring process – that will meaningfully shift what organisations can do with their assessment programmes over the next few years.

Virtual Assessment Centres 2.0 isn’t a product version number. It’s a mindset shift – toward assessment that’s more structured, more collaborative, fairer and more efficient than what came before. We’re not there yet, fully. But we’re building toward it every day.
