Vacation bidding flow audit for Canadian HR SaaS platform

Reduced HR support tickets by 22% and improved task efficiency by 30%.

Role

UX Researcher

Industry

HR

Duration

3 months


What is Zaddons?

Zaddons is an HR extension that helps unionized employees in Canada bid for vacation time based on seniority and quotas. The company wanted to evaluate how intuitive this feature really was. I joined the team to lead a usability study that uncovered where users were getting stuck and why. The insights helped shape improvements to make the system easier, faster, and more reliable for workers and HR teams.

Understanding the Problem

Vacation bidding is a recurring process where employees need to submit their time off preferences. If the system is unclear, they risk losing their spot and HR gets flooded with support requests. The Zaddons team had concerns about the current experience but lacked data. Our goal was to understand whether users could find key information and submit bids with confidence.

Kick-off meeting

To understand expectations and pain points, we had a kick-off meeting with the Zaddons product manager.

She shared the following with us:

Product Manager

Wanted to reduce the volume of HR support tickets generated by the vacation bidding feature

Wanted a usability study of the vacation bidding feature

Setting up the study


Once we were aligned with the Zaddons product manager, we defined our goal as follows:

Evaluate the usability of the vacation bidding experience in Zaddons, focusing on how users understand and complete core actions like checking their ranking and submitting a vacation bid.

We focused on two main flows:

  • Pre-bidding tasks (finding employee ranking and quota)

  • Transactional tasks (submitting 1-week and 2-week bids)

The goal was not only to identify friction points but also to understand why they happen and how they impact the employee’s experience.


Tasks conducted during the usability tests


User persona

We began by creating a user persona, Anderson, based on what the product manager shared with us. He represents the typical Zaddons user: someone with a physically demanding job, low familiarity with digital systems, and a clear need to complete tasks quickly and without confusion. Using Anderson as a reference, we recruited 12 participants with similar backgrounds to ensure realistic and relevant insights.


What we measured

We evaluated the experience across key usability dimensions using both behavioral and attitudinal metrics.

Each metric mapped to the usability dimension it captures:

  • Task Success: effectiveness

  • Task Completion Time and Customer Effort Score (CES): efficiency

  • Customer Satisfaction Score (CSAT): satisfaction

  • System Usability Scale (SUS): overall usability
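
As a reference for how the SUS number is produced, here is a minimal scoring sketch in Python. The ten responses are illustrative placeholders, not data from this study:

```python
# Standard SUS scoring for one participant (0-100 scale).
# The ten responses below are illustrative placeholders, not study data.
responses = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]  # 1-5 Likert answers to the 10 SUS items

# Odd-numbered items are positively worded (contribution = response - 1);
# even-numbered items are negatively worded (contribution = 5 - response).
contributions = [
    (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i means an odd item
    for i, r in enumerate(responses)
]

sus_score = sum(contributions) * 2.5  # scale the 0-40 raw sum to 0-100
print(sus_score)  # 85.0 for these placeholder answers
```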

To complement the numbers, we conducted post-test interviews after each session. This helped us understand why users struggled or succeeded, revealing root causes behind the observed behaviors and pointing to design improvements.

How we collected data

The usability sessions were conducted in a controlled setting using the Ballpark platform. Each session involved a moderator and an observer, following a standardized protocol to ensure consistency.


Detailed protocol


Data analysis

To analyze the results, we combined quantitative metrics from performance and satisfaction scores with qualitative feedback from post-test interviews.


Quantitative analysis

To determine whether the differences in task performance were meaningful, we ran paired-samples t-tests (appropriate because every participant completed all tasks).

We compared:

  • Ranking task vs Quota task

  • One-week bid vs Two-week bid

These tests helped us determine which tasks our recommendations should focus on.
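
For illustration, here is a minimal sketch of one of these paired comparisons in Python; the completion times below are placeholders standing in for our raw data:

```python
# Paired-samples t-test comparing completion times (seconds) on the
# ranking task vs. the quota task. Each index is the same participant,
# which is why a paired test applies. Values are illustrative placeholders.
from scipy import stats

ranking_times = [62, 75, 88, 70, 91, 80, 74, 85, 79, 83, 77, 95]
quota_times = [98, 130, 105, 124, 119, 102, 111, 127, 108, 116, 121, 106]

t_stat, p_value = stats.ttest_rel(ranking_times, quota_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would suggest the time difference between the two
# tasks is unlikely to be due to chance for this sample.
```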


Qualitative analysis

We conducted a thematic analysis using an affinity diagram in Miro to identify patterns in user interviews. Each observation was captured as a sticky note and color-coded: red for negative comments and green for positive ones.

After collecting all notes, we clustered them into meaningful categories based on recurring themes. This process helped us synthesize qualitative insights and complement the quantitative data, allowing us to better understand user pain points, needs, and opportunities for improvement.
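
To give a flavor of that synthesis step, here is a minimal sketch of tallying color-coded notes per theme; the notes are hypothetical examples, not our actual Miro board:

```python
# Tally negative (red) affinity notes per theme to surface the biggest
# pain points. The notes below are hypothetical, not our actual board.
from collections import Counter

notes = [
    ("quota discoverability", "red"),
    ("quota discoverability", "red"),
    ("ranking icon meaning", "red"),
    ("one-week bid flow", "green"),
    ("two-week bid steps", "red"),
    ("one-week bid flow", "green"),
]

negatives = Counter(theme for theme, color in notes if color == "red")
for theme, count in negatives.most_common():
    print(f"{theme}: {count} negative mention(s)")
```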


Our main findings

After analyzing the data, we uncovered multiple insights, grouped below by task.

Informational tasks

Task: Find employee ranking and quota

  • Effectiveness: 10 of 12 participants completed the ranking task successfully, vs. only 2 of 12 for the quota task

  • Efficiency (time): avg 79.9 sec for ranking vs. 113.9 sec for quota (42% longer)

  • Satisfaction (CSAT): 4.42 / 7 for ranking vs. 2.89 / 7 for quota

  • Perceived effort (CES): 3.25 / 5 for ranking vs. 2.25 / 5 for quota


How did the users feel about the pre-bidding tasks?

“I think that could have [been] made a little bit more obvious where I can find it. (...) Because it was obviously right in my face, but I didn’t know it was the quota.” – P10

“It was hard to find... the rank. It was just a little icon that didn't tell me much.” – P02

“I actually have no idea where to find that (quota).” – P05

Identifying the pain points

  1. Quota visibility is a major usability issue

Despite being a core element in vacation bidding, most participants failed to locate the quota. Only 2 out of 12 succeeded, indicating poor discoverability. Additionally, the quota task took 42% longer on average than the ranking task. This suggests not only a lack of clarity but also a more time-consuming process, impacting user efficiency and user satisfaction (2.89 out of 7).

Quota is highlighted in green.

  2. High success in the ranking task does not mean clarity

Although 10 participants completed the ranking task, several still expressed confusion, especially around the icon used to access that information. Participants could not recognize key interface elements, like the ranking icon, which added unnecessary steps and confusion.

Ranking highlighted in green.

Bidding tasks

Task: Submit a 1-week and a 2-week vacation bid

  • Effectiveness: 12 of 12 completed the 1-week bid successfully, vs. 10 of 12 for the 2-week bid

  • Efficiency (time): avg 48.7 sec for the 1-week bid vs. 74.7 sec for the 2-week bid (53% longer)

  • Satisfaction (CSAT): 4.83 / 7 for the 1-week bid vs. 4.25 / 7 for the 2-week bid

  • Perceived effort (CES): 2.58 / 5 for the 1-week bid vs. 2.00 / 5 for the 2-week bid


How did users feel about completing the bids?

“I think the two week bid, I struggled because it wouldn’t let me select two weeks total, which I didn’t understand if it was something that I did or if it was a system blockage because it didn’t say.” – P11

“Some parts are easy to use, but some parts are very confusing and I have no clue how to proceed.” – P05

"I wanted to select the whole period of two weeks, but I had to do this action twice because it didn't allow me to do that." - P11

"Too many clicks. And I didn't understand why I had to do all these clicks." - P01


Identifying the pain points

  1. Two-week bidding flow creates unnecessary complexity

While the one-week bid task was completed successfully by all participants, the two-week bid led to confusion and inefficiencies. Only 10 out of 12 users succeeded, and the average completion time increased by 26 seconds. Satisfaction scores also dropped.

These findings suggest that the interface lacked clear guidance for multi-week bidding. Users were unsure how to perform the task in one action, and several believed they had to repeat the process or ask for help. This impacted both efficiency and confidence, highlighting an opportunity to simplify the bid interaction and reduce friction.

Only one week was able to be selected at a time.

  2. Too many steps to add a bid impacted flow and satisfaction

Several participants noted that the process of adding a bid involved too many clicks and redundant steps. After selecting the days, users were required to validate, then click “Add Bid,” and finally submit: a sequence that felt unnecessarily long and repetitive. Even users who completed the task successfully expressed frustration with the interaction flow, describing it as time-consuming and unintuitive.

Too many additional steps were required to complete a bid.


Recommendations

To guide the product team in prioritizing usability improvements, we classified each issue based on Nielsen’s severity ratings for usability problems. These ratings help distinguish between minor concerns and those that significantly affect the user experience.
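
As an illustration of that triage, here is a minimal sketch; the issue list mirrors the tables below, and the 0-4 scale is Nielsen’s:

```python
# Triage of our findings on Nielsen's 0-4 severity scale; the issues and
# ratings below mirror the tables in this section.
findings = [
    {"issue": "Quota feature is hard to discover", "severity": 3},
    {"issue": "No guidance for selecting a two-week bid", "severity": 3},
    {"issue": "Weak hierarchy and unclear icon for ranking info", "severity": 2},
    {"issue": "Excess steps to submit a bid", "severity": 2},
]

# Sort highest severity first so major problems are addressed before minors.
for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
    print(f"Severity {f['severity']}: {f['issue']}")
```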

Severity 3: Major Usability Problems

Problem: Lack of visibility and clarity around the “Quota” feature. Most users could not find the quota due to low visibility and a lack of contextual clarity, which caused delays and task failure during critical pre-bidding actions.

Recommendations:

  • Higher dev effort: Add a brief tutorial at the beginning of the experience to explain key features like quota and ranking. This requires more development work but can significantly improve onboarding and confidence.

  • Lower dev effort: Improve the visual clarity of the quota by using a brighter color or stronger visual hierarchy. This is a quicker fix that can increase discoverability with minimal development impact.

Tutorial recommendation.

Problem: Lack of system guidance on selecting a bid for 2 weeks. Many users did not realize they had to repeat the one-week process twice, which caused delays and confusion.

Recommendations:

  • Higher dev effort: Let users select two weeks at once using the date picker.

  • Lower dev effort: Add a tooltip that clearly instructs the user to select a week, submit, then repeat.

Selecting two weeks at a time.


Severity 2: Minor Usability Problems

Problem: Lack of hierarchy or visual emphasis on the ranking information, plus an inconsistent icon. Users had trouble locating their employee ranking because of weak visual hierarchy and an unrecognizable icon; many did not realize what the symbol represented, leading to confusion during the task.

Recommendation: Enhance the visibility of the ranking section by using a clearer icon, adding a descriptive label, and applying bold styling. Pairing icon and text will help users recognize the ranking information at a glance and reduce hesitation.


Problem: Excess steps for submitting a bid. The additional “Add Bid” button after selecting dates added unnecessary friction.

Recommendation: Automatically add selected dates to the bid list after clicking “Validate”, eliminating the need for an “Add Bid” button.


Impact

-22%

HR support tickets

+30%

Efficiency in bidding

After the usability recommendations were implemented, the company reported significant improvements in both user experience and bidding efficiency:

  • 22% decrease in HR support tickets
    Users encountered fewer issues and uncertainties, reducing the load on the support team.

  • 30% increase in bidding efficiency
    Employees were able to complete vacation bids more quickly and confidently.

These results highlighted the business value of user research and how targeted design improvements can drive measurable impact in SaaS platforms.

Takeaways

From this project, I learned that combining usability metrics with qualitative insights is essential to truly understand where and why users struggle. Observing task completion alone wasn’t enough; post-test interviews revealed the underlying causes of hesitation, like unclear icons and missing guidance.


Copyright 2025 by Nicolas Peyre