Zaddons: Vacation bidding flow audit for HR SaaS platform

Reduced HR support tickets by 22% and improved task efficiency by 30%.

Role

UX Researcher

Industry

HR

Duration

3 months


What is Zaddons?

Zaddons is an HR extension that helps unionized employees in Canada bid for vacation time based on seniority and quotas. The company wanted to evaluate how intuitive this feature really was. I joined the team to lead a usability study that uncovered where users were getting stuck and why. The insights helped shape improvements to make the system easier, faster, and more reliable for workers and HR teams.

Understanding the Problem

Vacation bidding is a recurring process where employees need to submit their time off preferences. If the system is unclear, they risk losing their spot and HR gets flooded with support requests. The Zaddons team had concerns about the current experience but lacked data. Our goal was to understand whether users could find key information and submit bids with confidence.

Kick-off meeting

To understand expectations and pain points, we had a kick-off meeting with the Zaddons product manager.

She shared the following with us:

Product Manager

Wanted to reduce HR support tickets generated by the vacation bidding feature

Wanted a usability study of the vacation bidding feature

Setting up the study


With expectations aligned with the Zaddons product manager, we defined our goal as follows:

Evaluate the usability of the vacation bidding experience in Zaddons, focusing on how users understand and complete core actions like checking their ranking and submitting a vacation bid.

We focused on two main flows:

Pre-bidding tasks (finding employee ranking and quota)

Transactional tasks (submitting 1-week and 2-week bids)

The goal was not only to identify friction points but also to understand why they happen and how they impact the employee’s experience.


Tasks conducted during the usability tests


User persona

We began by creating a user persona, Anderson, based on what the product manager shared with us. He represents the typical Zaddons user: someone with a physically demanding job, low familiarity with digital systems, and a clear need to complete tasks quickly and without confusion. Using Anderson as a reference, we recruited 12 participants with similar backgrounds to ensure realistic and relevant insights.


What we measured

We evaluated the experience across key usability dimensions using both behavioral and attitudinal metrics.

Metric | Usability dimension
Task Success | Effectiveness
Task Completion Time and Customer Effort Score (CES) | Efficiency
Customer Satisfaction Score (CSAT) | Satisfaction
System Usability Scale (SUS) | Overall usability

To complement the numbers, we conducted post-test interviews after each session. This helped us understand why users struggled or succeeded, revealing root causes behind the observed behaviors and pointing to design improvements.
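For reference, SUS produces a 0-100 score from ten 1-5 Likert items using a fixed formula: odd (positively worded) items contribute the response minus 1, even (negatively worded) items contribute 5 minus the response, and the sum is scaled by 2.5. A minimal sketch of that standard scoring, independent of our study data:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 item responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are scaled by 2.5 to yield a 0-100 score.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Strong agreement with positive items and strong disagreement with
# negative items yields the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```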

How we collected data

The usability sessions were conducted in a controlled setting using the Ballpark platform. Each session involved a moderator and an observer, following a standardized protocol to ensure consistency.


Detailed protocol


Data analysis

To analyze the results, we combined quantitative metrics from performance and satisfaction scores with qualitative feedback from post-test interviews.


Quantitative analysis

To determine whether the differences in task performance were statistically meaningful, we ran paired-samples t-tests (appropriate because all participants completed all tasks).

We compared:

  • Ranking task vs Quota task

  • One-week bid vs Two-week bid

These tests helped us determine which tasks our recommendations should focus on.
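The comparison above can be sketched with SciPy's paired-samples t-test. The completion times below are illustrative placeholders, not our study's data:

```python
from scipy import stats

# Hypothetical task-completion times in seconds for the same 12
# participants on each pre-bidding task (illustrative values only).
ranking_task = [35, 42, 28, 51, 39, 44, 33, 47, 40, 36, 49, 38]
quota_task   = [58, 61, 45, 72, 55, 66, 49, 70, 60, 52, 75, 57]

# A paired-samples test is appropriate because every participant
# completed both tasks, so the observations form matched pairs.
t_stat, p_value = stats.ttest_rel(ranking_task, quota_task)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With real data, a p-value below the chosen significance level (commonly 0.05) would indicate that the difference in completion times between the two tasks is unlikely to be due to chance.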


Qualitative analysis

We conducted a thematic analysis using an affinity diagram in Miro to identify patterns in user interviews. Each observation was captured as a sticky note and color-coded: red for negative comments and green for positive ones.

After collecting all notes, we clustered them into meaningful categories based on recurring themes. This process helped us synthesize qualitative insights and complement the quantitative data, allowing us to better understand user pain points, needs, and opportunities for improvement.



Copyright 2025 by Nicolas Peyre