Zaddons: Vacation bidding flow audit for HR SaaS platform
Reduced HR support tickets by 22% and improved task efficiency by 30%
Role
UX Researcher
Industry
HR SaaS
Duration
2 months
Our main findings
After analyzing the data, we uncovered multiple insights. In this section, we separate the insights per task.
Informational tasks
Task: Find employee ranking and quota

| Metric | Ranking Task (Avg) | Quota Task (Avg) |
|---|---|---|
| Effectiveness | 10 of 12 completed successfully | Only 2 of 12 completed successfully |
| Efficiency (time) | Avg time: 79.9 sec | Avg time: 113.9 sec (42% longer) |
| Satisfaction | CSAT: 4.42 / 7 | CSAT: 2.89 / 7 |
| Efficiency (perceived effort) | CES: 3.25 / 5 | CES: 2.25 / 5 |
How did users feel about the pre-bidding tasks?
“I think that could have [been] made a little bit more obvious where I can find it. (...) Because it was obviously right in my face, but I didn’t know it was the quota.” – P10
“It was hard to find... the rank. It was just a little icon that didn’t tell me much.” – P02
“I actually have no idea where to find that (quota).” – P05
Identifying the pain points
Quota visibility is a major usability issue
Despite being a core element of vacation bidding, the quota was located by only 2 of 12 participants, indicating poor discoverability. The quota task also took 42% longer on average than the ranking task, suggesting not only a lack of clarity but also a more time-consuming process, which hurt both efficiency and satisfaction (CSAT 2.89 out of 7).

Quota is highlighted in green.
High success in the ranking task does not mean clarity
Although 10 participants completed the ranking task, several still expressed confusion, especially around the icon used to access that information. Failing to recognize key interface elements, like the ranking icon, added unnecessary steps and hesitation.

Ranking highlighted in green.
Bidding tasks
Task: Place a one-week and a two-week vacation bid

| Metric | 1-week Bid | 2-week Bid |
|---|---|---|
| Effectiveness | 12 of 12 completed successfully | 10 of 12 completed successfully |
| Efficiency (time) | Avg time: 48.7 sec | Avg time: 74.7 sec (53% longer) |
| Satisfaction | CSAT: 4.83 / 7 | CSAT: 4.25 / 7 |
| Efficiency (perceived effort) | CES: 2.58 / 5 | CES: 2.00 / 5 |
How did users feel about completing the bids?
“I think the two-week bid, I struggled because it wouldn’t let me select two weeks total, which I didn’t understand if it was something that I did or if it was a system blockage because it didn’t say.” – P11
“Some parts are easy to use, but some parts are very confusing and I have no clue how to proceed.” – P05
“I wanted to select the whole period of two weeks, but I had to do this action twice because it didn’t allow me to do that.” – P11
“Too many clicks. And I didn’t understand why I had to do all these clicks.” – P01
Identifying the pain points
Two-week bidding flow creates unnecessary complexity
While the one-week bid task was completed successfully by all participants, the two-week bid led to confusion and inefficiencies. Only 10 out of 12 users succeeded, and the average completion time increased by 26 seconds. Satisfaction scores also dropped.
These findings suggest that the interface lacked clear guidance for multi-week bidding. Users were unsure how to perform the task in one action, and several believed they had to repeat the process or ask for help. This impacted both efficiency and confidence, highlighting an opportunity to simplify the bid interaction and reduce friction.

Only one week was able to be selected at a time.
Too many steps to add a bid impacted flow and satisfaction
Several participants noted that the process of adding a bid involved too many clicks and redundant steps. After selecting the days, users had to validate, then click “Add Bid,” and finally submit; this sequence felt unnecessarily long and repetitive. Even users who completed the task successfully expressed frustration with the interaction flow, describing it as time-consuming and unintuitive.

Too many additional steps were required to complete a bid.
Recommendations
To guide the product team in prioritizing usability improvements, we classified each issue based on Nielsen’s severity ratings for usability problems. These ratings help distinguish between minor concerns and those that significantly affect the user experience.
Severity 3: Major Usability Problems
| Problem | Recommendation |
|---|---|
| Lack of visibility and clarity around the “Quota” feature | Higher dev effort: Add a brief tutorial at the beginning of the experience to explain key features like quota and ranking. This requires more development effort but can significantly improve onboarding and confidence. |
| Most users could not find the quota feature due to low visibility and lack of contextual clarity. This caused delays and task failure, especially during critical pre-bidding actions. | Lower dev effort: Improve visual clarity of the quota by changing its color to a brighter option or applying stronger visual hierarchy. This is a quicker fix that can increase discoverability with minimal development impact. |

Tutorial recommendation.
| Problem | Recommendation |
|---|---|
| Lack of system guidance on selecting a bid for 2 weeks | Higher dev effort: Let users select two weeks at once using the date picker. |
| Users lacked guidance on how to submit a two-week vacation bid. Many did not realize they had to repeat the process twice, which caused delays and confusion. | Lower dev effort: Add a tooltip that clearly instructs the user to select a week, submit, then repeat. |

Two weeks selection at a time.
Severity 2: Minor Usability Problems
| Problem | Recommendation |
|---|---|
| Lack of hierarchy or visual emphasis on the ranking information, and an inconsistent icon. Users had trouble locating their employee ranking due to weak visual hierarchy and an unrecognizable icon; many did not realize what the symbol represented or where to find the ranking, leading to confusion during the task. | Enhance the visibility of the ranking section by using a clearer icon, adding a descriptive label, and applying bold styling. Pairing icon and text will help users recognize the ranking information at a glance and reduce hesitation. |

| Problem | Recommendation |
|---|---|
| Excess steps for submitting a bid. The additional “Add Bid” button after selecting dates added unnecessary friction. | Automatically add selected dates to the bid list after clicking “Validate”, eliminating the need for an “Add Bid” button. |

Outcomes
−22% HR support tickets | +30% bidding efficiency
After the usability recommendations were implemented, the company reported significant improvements in both user experience and bidding efficiency:
22% decrease in HR support tickets: users encountered fewer issues and uncertainties, reducing the load on the support team.
30% increase in bidding efficiency: employees were able to complete vacation bids more quickly and confidently.
These results highlighted the business value of user research and how targeted design improvements can drive measurable impact in SaaS platforms.
Takeaways
From this project, I learned that combining usability metrics with qualitative insights is essential to truly understand where and why users struggle. Observing task completion alone wasn’t enough; post-test interviews revealed the underlying causes of hesitation, like unclear icons and missing guidance.
Other projects
Ciudapolis: Real Estate management mobile app redesign
Increased task effectiveness by 30% and reduced task completion time by 40%.
Fractal: SaaS HR platform redesign
Reduced internal HR support tickets by 25% and increased user satisfaction in request submission flows by 18%.
Zaddons: Vacation bidding flow audit for HR SaaS platform
Reduced HR support tickets by 22% and improved task efficiency by 30%.