Evolution of the candidate report

Summary

Mission

Arctic Shores are a psychometric assessment provider, offering hiring teams the ability to sift candidates by assessment score via a distribution platform or directly through their ATS.

By delivering clear, task-based insights and automated candidate feedback, Arctic Shores reduce the burden on hiring teams, improve efficiency, and empower candidates with confidence and clarity throughout the hiring process.

Responsibilities

As Senior UX Manager, I led and oversaw all work on the candidate feedback report, including discovery, user research, and design.

I oversaw the design and launch of two updated candidate feedback reports, as part of the continued iteration of the product and available features.

This work added value for customers by reducing direct candidate queries and alleviating candidate concerns about the testing process and results.

Organisation

Arctic Shores

London/Manchester, UK

Website / LinkedIn

Role

Senior UX Manager

UX Design, Interaction Design, UI Design, Product Strategy, Usability Testing.

Impact

The initial iteration from PDF to web application produced great feedback from both customers and candidates.

4/5

Candidate approval rating

2 million+

Candidates served with feedback

1,000s of hours

Saved by reducing customer onus


Background and problem space

Arctic Shores were moving from an extremely bespoke assessment-creation system to a self-serve software-as-a-service platform (UNA).

As part of the value-added service provided by Arctic Shores, candidates received a feedback report outlining how their skills aligned with the requirements of a role.

The existing candidate feedback report was developed through extensive collaboration between customer hiring teams, Arctic Shores business psychologists, and internal stakeholders. While thorough, this approach was based on an extremely bespoke system of 40+ datapoints.

The UNA platform was built to replace the complexity and cost of Datahub. The change in datapoints and approach to scoring also required changes to the candidate feedback report.

The legacy Datahub approach

Bespoke Assessments

Highly customised assessments created in collaboration with Arctic Shores business psychologists, customer subject matter experts, and hiring teams.

Benefits

  • In-depth results

  • Expert advice

  • High role relevancy

Limitations

  • Not scalable

  • Dependent on Arctic Shores psychologist availability

  • Expensive for customers and the business

  • Time-consuming to deliver

  • Excessively complex for many roles

  • Lengthy candidate experience

The UNA approach

Personality Trait Assessments

Customer-configured, personality-based assessments built from a pool of 13 personality and 3 working-intelligence (cognitive) traits.

Benefits

  • Scalable

  • Customer-led selection

  • Cost-effective after initial setup

  • Immediately available

  • Grounded in personality-based traits

Limitations

  • Less specificity

  • Reduction in expert advice

  • Lengthy candidate experience

Structure of original report

The original report was delivered as a downloadable, multi-page PDF distributed via email. The document followed a linear format, beginning with introductory content and progressing through descriptions and scores for a range of personality traits.

Scores were represented on a ten-point scale, positioned between two opposing behavioural poles.

Title page
An introduction and welcome to the candidate.

How to interpret your profile
An introduction to how the scores were collected from the candidate application.

How this report is structured
Brief description of how the report will present candidate scores and further details.

Interpreting radial results
How the candidate scores will be presented.

Example
An example score and description of results.

Things to remember
Frequently requested details regarding the assessment and results.

Your profile

Title
Each personality trait

Description
A brief description of the personality trait

Score
Represented on a 10-step scale, showing the candidate's likelihood of favouring either pole of the trait.

Summary
A brief summary of how this trait score may affect how the individual works.

Research

Original report: Key feedback

Feedback was received from customer conversations, customer success managers and candidate feedback surveys.

Candidate response

Feels professional

Lots of information

Traits not applicable to the role

Didn't agree with the results

Scale seemed arbitrary

PDF only

Hard to read on phones

Difficult to navigate easily

Customer response

Acts as feedback

Difficult to find through platform

Candidates ask about traits

Scale adds confusion for candidates

Candidate approval rating: 3/5

At this point I journey-mapped the candidate experience, looking for pain points and opportunities.

Unfortunately I no longer have access to this work.

Personas

To build on my initial research, I spoke with the customer success team, interviewed customers, and reviewed feedback from candidate surveys. This helped me create updated, relevant personas.

The two main candidate personas were graduate applicants and experienced candidates. Each group faced different challenges, such as varying levels of confidence with technology and comfort with game-based tasks in the application process. While there were some differences in experience and age, much of the feedback and key concerns overlapped.

Customer personas followed a similar pattern, focusing on graduate recruiters and strategic hiring managers, both of whom were recruiting for these core candidate groups.

Candidates

Customers

Personas were used to guide difficult decisions across the redesign of the original report, offering insight into the differing viewpoints of customers and candidates.

Competitor analysis

I investigated multiple similar businesses offering psychometric and trait-based reviews of candidates/users.

My focus was predominantly on how candidate data was presented and shared.

Traitify

Assess First

I expanded my analysis to include other data-orientated assessments, such as 23andMe.

23andMe

Synthesis and findings

Based on personas, user journeys, competitor analysis, and direct feedback, I made the following key findings:

Candidate requirements

Responsive web-application
Allowing the candidate to easily open and read their report on their mobile device or desktop

Navigation
Providing quick and easy access to areas of the report

Trait clarity
Further information on how each trait applies to their continued development

Trends
How to make improvements to their score and areas that they may wish to focus on.

Candidates wanted the ability to learn from the experience, adapting and improving their behaviours.

Customer requirements

FAQs
Reduce candidate contact with the customer about how to interpret the report

Less specificity
Reduce the number and precision of traits that candidates ask about

Value
Act as an additional service for candidates outside of the customer's hiring process

Customers wanted to provide valuable feedback for candidates without placing the onus on their own teams, reducing time and resource demands for hiring managers and recruiters.

Exploration

Page navigation

I began exploring the page navigation in two separate styles:

Single run-on page: Similar to the PDF, but as an infinite-scroll web application, allowing the candidate to scroll easily through their results

Section-based design: Working closely with the psychometric SMEs, we split the traits into personality-based sections focused on specific skills, for example Thinking Style
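To illustrate the section-based structure, here is a minimal TypeScript sketch of the underlying data shape. The section title, trait name, and copy are hypothetical placeholders for illustration, not the actual UNA data model.

```typescript
// Minimal sketch of a section-based report data shape.
// All names and copy are illustrative placeholders.

interface Trait {
  name: string;    // a single personality trait
  summary: string; // candidate-facing description of the trait
}

interface Section {
  title: string;   // personality section, e.g. "Thinking Style"
  traits: Trait[]; // traits grouped under this section
}

const report: Section[] = [
  {
    title: "Thinking Style",
    traits: [
      {
        name: "Deliberation", // hypothetical trait name
        summary: "How you weigh options before acting.",
      },
    ],
  },
  // Further sections follow the same shape, giving the page
  // navigation one anchor per section rather than one long scroll.
];

console.log(report.map((s) => s.title)); // -> ["Thinking Style"]
```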

Single page

Original run-on design

Initial colour and illustration updates

Example of the run-on design on mobile

Section-based (mobile)

Sectioned area showing run-on content

Mobile personality sections

Mobile survey area

Score and feedback

Scoring was extremely important from a psychometric perspective, so all work was completed with SME involvement.

10-point polar scale

Psychometric SMEs suggested moving away from the segmented approach toward a 3-point scale, showing below-average, average, and above-average to represent trends.

3-point radial scale

Influenced by competitor patterns, I explored radial scales to show Low, Medium, and High trait scores. I tested this concept internally to validate its usability.

Testing revealed that users required additional explanation to correctly interpret the scores. This reduced visual clarity and added cognitive load.

Since the product was intentionally moving away from highly precise scoring, the need for extra explanation worked against the design goals, leading me to rule out this approach.

3-point polar scale

Initial guidance for candidates and layout

Final scoring decision

Polar scales were ultimately chosen to communicate below average, average, and above average scores in a clear and approachable way. Each point on the scale was supported by concise, descriptive language that explained how the score influenced the trait, rather than focusing on numerical precision.

This approach reduced cognitive load, maintained visual simplicity, and aligned with the product direction of providing meaningful insight without overly precise scoring. It also helped set clearer expectations for candidates, reducing the need for follow-up questions or external interpretation.
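As a rough illustration of the banding logic, the TypeScript sketch below maps a normalised trait score to one of the three polar bands, each paired with descriptive copy. The 0-1 score, the cut points, and the wording are assumptions for illustration, not Arctic Shores' actual psychometric norms.

```typescript
// Illustrative three-band polar scoring. The normalised 0-1 score,
// the cut points, and the copy are assumptions, not real scoring norms.

type Band = "below average" | "average" | "above average";

function toBand(score: number): Band {
  if (score < 0.4) return "below average";
  if (score <= 0.6) return "average";
  return "above average";
}

// Each band maps to descriptive language rather than a number,
// matching the decision to avoid overly precise scoring.
const bandCopy: Record<Band, string> = {
  "below average": "You lean toward the opposite pole of this trait.",
  "average": "You balance both approaches to this trait.",
  "above average": "You lean strongly toward this trait.",
};

const score = 0.72; // example normalised trait score
console.log(toBand(score), "-", bandCopy[toBand(score)]);
// -> above average - You lean strongly toward this trait.
```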

Design

Wireframing and layout

I created medium-fidelity designs for usability testing across both mobile and desktop. These were tested internally and reviewed with engineers to ensure the solutions were technically feasible.

Working at wireframe level allowed for rapid iteration and helped keep feedback focused on navigation and usability, rather than visual styling.

Mobile user journey wireframe

Desktop user journey wireframe

Feedback showed that candidates felt overwhelmed when landing on the page, leading to initial discomfort.

To address this, I introduced an onboarding journey that provided clear guidance upfront and quick access to FAQs. This reduced cognitive load and lowered the likelihood of candidates needing to contact customer support for clarification.

Mobile onboarding journey

Desktop onboarding journey

Release

The initial web application was released, replacing the static PDF.

Personality Criteria landing page

Ongoing testing

Following the release of the second version of the Candidate Feedback Report, a feedback survey was introduced to gather continuous feedback and identify trends.

Candidate concerns

Not enough detail

Candidates struggled with average scores.
By presenting only below-average, average, and above-average, the report did not give candidates enough detail.

No opportunity to improve.
Candidates received little to no information on how to improve their score. As the assessment was promoted as personality-based, it did not necessarily align with a job role.

Sparse

Design
Feedback provided by candidates suggested that the design was too corporate and sparse.

Layout
Some candidates commented on how fragmented the areas were and how difficult it had become to navigate to specific traits.

Improved rating

Based on the feedback survey, the candidate report showed a strong jump in approval rating:

Candidate approval rating: 4/5


The move to skill enablers

In an effort to improve test-retest reliability and future-proof the assessment against AI, a major change occurred in the product: moving from 15 personality success criteria to 6 skill enablers.


These changes meant there would be a reduction in both feedback and traits.

Learn more about skill enablers >

Skill-enablers™ simplify skills-based hiring and help companies identify high-potential candidates for the AI-enabled workplace. 

Read a case study about the new selection journey.

Opportunity

The introduction of skill enablers represented a significant shift from a personality-based framework to a skills-based approach, requiring a reconsideration of the information provided to candidates. Reducing the model from 15 personality-based success criteria (encompassing more than 40 individual traits) to a total of six traits risked exacerbating existing concerns that candidate feedback lacked sufficient detail and depth.

Layout: fixing the sparsity

I reviewed the report layout after feedback showed it felt too sparse on desktop.

Existing single column

The existing single column felt bare, lacking context and introductory information.

Updated multi column

Additional information
Introduction and detail were added to the left column; this included management of the report (including downloading a PDF version)

Layout
The layout introduced two main areas of information: the introduction/management area and the skill-enablers

Scoring: increasing the detail

The aforementioned move to the 3-point polar scale provided too little detail, especially for candidates who scored average across the board.

Based on input from psychometric SMEs, I adopted a four-point scoring approach with no average score and developed clear, candidate-focused language to explain this methodology. I also expanded the information provided for each skill, reducing the total number of traits while delivering richer insight and more actionable feedback on how candidates could develop and improve in each area.
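A possible shape for the four-point revision is sketched below: with no middle band, every candidate lands on a side of the scale and receives development guidance. The thresholds, band numbering, skill name, and guidance text are all hypothetical placeholders, not the shipped scoring logic.

```typescript
// Illustrative four-band scale with no middle band. Thresholds,
// band numbering, and guidance text are hypothetical placeholders.

type FourBand = 1 | 2 | 3 | 4;

interface SkillFeedback {
  band: FourBand;
  description: string; // what the score means for this skill enabler
  improvement: string; // actionable guidance for development
}

function toFourBand(score: number): FourBand {
  if (score < 0.25) return 1;
  if (score < 0.5) return 2;
  if (score < 0.75) return 3;
  return 4;
}

function feedbackFor(skill: string, score: number): SkillFeedback {
  const band = toFourBand(score);
  return {
    band,
    description: `Your ${skill} result sits in band ${band} of 4.`,
    improvement: "Per-skill, per-band guidance would be authored by SMEs.",
  };
}

// A candidate near the mean still receives a directional result,
// addressing the "average across the board" complaint.
// ("Learning Agility" is a hypothetical skill-enabler name.)
console.log(feedbackFor("Learning Agility", 0.52));
```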

3 point scale

The existing scale, showing only below-average, average, and above-average

4 point scale

Updated scoring and design, including descriptions

Skill-enabler design

The final skill-enabler design presented the score, a description of the trait, and ways for the candidate to improve their score.

Final designs

Onboarding

The final onboarding journey was redesigned to match the onboarding journey of the assessment.

Desktop

Mobile

Single column with horizontal scrolling for the traits

Outcome and impacts

In the final round, the third iteration received consistently positive feedback, with participants describing the experience as friendly, lively, and intuitive. The proposed solutions were validated through both internal and external usability testing, which confirmed their effectiveness for users.

Candidate feedback

After release, testing continued in order to understand whether the new designs addressed the concerns initially raised by the change to the web application.

Candidates used the following words to describe their experience with the new feedback report:

Friendly

Lively

Caring

Minimal

Intuitive

Simplistic

Pleasant

Concise

Flowing

Insightful

Captivating

Welcoming

Clear

Informative

Helpful

The general feedback remained positive, continuing the trend of a strong candidate approval rating:

Candidate approval rating: 4/5

Reflection and key learnings

Test early, iterate often

Testing with actual users often reveals subtle pain points in decision-making.

Too much detail

Is as dangerous to the customer and candidate as too little.

Future-proof

A scalable design layout allows new skill-enablers and information to be introduced easily.

Differing personas

Designs must account for differing users.

Technology competency

Candidates still wanted the ability to download PDF files to keep or refer back to easily.


Looking for an experienced
UX Designer?
