Weever.ai: Evaluating trust and usability in an AI shopping assistant

Uncovering what makes AI feel human and trustworthy

Role

UX Researcher

Industry

AI

Duration

2 months


Overview

Weever.ai is an AI-powered shopping assistant that recommends products based on user prompts. My role was to evaluate how people trust and perceive usability in this type of conversational AI.

Through a controlled usability study, we explored how search context, whether the search is general or specific, affects user trust and usability, uncovering what makes an AI interface feel more intuitive and trustworthy.

The challenge

The Weever.ai team wanted to understand how users interact with the platform’s AI-powered search and identify which factors most influence the platform's bounce rate and conversions.

Our goal

Our goal was to uncover these factors and identify how Weever.ai could build more transparent and human-like interactions that boost conversions.

Accordingly, we proposed to evaluate usability and trust across two common search scenarios:

a specific search, where users already know what they want, and a general search, where they explore the platform to get ideas.

Research objectives

Compare the level of satisfaction between general search and specific search. 

Compare how effectively users achieve goals between general search and specific search. 

Compare the effort between general search and specific search. 

Compare users’ trust in the AI assistant across general and specific search, relative to their trust before using the product.


Setting up the study


To understand how users experience trust and usability in Weever.ai, we conducted a within-subjects usability study with 12 participants. Each participant completed two tasks, one general and one specific search, while thinking aloud so we could capture their reasoning. Each task was framed as a scenario matching one of the two search contexts.



The sessions combined quantitative measures with qualitative observation to capture both behavioral patterns and perceived experience.

Measure | Dimension
Customer Satisfaction Score (CSAT) | Satisfaction
Customer Effort Score (CES) | Effort
Task Success | Effectiveness
Human-Computer Trust Scale | Trust
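
To illustrate how these paired measures can be compared between the two search contexts, here is a minimal analysis sketch. The ratings below are hypothetical placeholders, not the study's data, and the Wilcoxon signed-rank test is one reasonable choice for a small, ordinal, within-subjects sample; the same pattern applies to CES and the trust scale.

```python
# Minimal analysis sketch for the within-subjects comparison.
# The scores are placeholders, NOT the study's real data: each list
# holds one post-task rating per participant (n = 12), paired by
# participant across the two search contexts.
from scipy.stats import wilcoxon

csat_general  = [4, 3, 5, 2, 4, 3, 3, 4, 2, 3, 4, 3]  # hypothetical 1-5 CSAT
csat_specific = [3, 3, 4, 2, 3, 2, 3, 3, 2, 2, 3, 3]

# Wilcoxon signed-rank test: a non-parametric paired test suited to
# ordinal survey scales and small samples like this one.
stat, p = wilcoxon(csat_general, csat_specific)
print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")
```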


At the beginning of the study, participants completed a short task to capture their first impressions of the platform. At the end, these perceptions were compared with their responses in a post-test survey reflecting the overall experience.


After completing each search task, participants filled out a post-task survey to capture their perceptions. Task order was randomized across participants to counterbalance the two conditions and minimize learning effects.
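
For concreteness, here is a small sketch of one way a randomized-but-counterbalanced task order could be generated. Participant IDs and task labels are illustrative assumptions, not the study's actual assignment scheme.

```python
# Sketch of a counterbalanced task-order assignment: build a balanced
# pool of the two possible orders (6 of each for 12 participants),
# then shuffle so assignment is random but each order appears equally
# often, offsetting learning effects between the first and second task.
import random

TASKS = ("specific search", "general search")
participants = [f"P{i:02d}" for i in range(1, 13)]  # hypothetical IDs

orders = [TASKS, TASKS[::-1]] * (len(participants) // 2)
random.shuffle(orders)
schedule = dict(zip(participants, orders))

for p, order in schedule.items():
    print(p, "->", " then ".join(order))
```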


At the end of the test, we conducted post-test interviews to explore why users interacted with the technology the way they did, uncovering key pain points and opportunities for improvement.

Participants

To ensure relevant insights, we recruited participants who met specific inclusion criteria. The following diagram provides a summary of the participant profile.

Our findings and recommendations

The study revealed how search context, system performance, and transparency directly shaped users’ experience. Below are the main findings, highlighting where Weever.ai succeeded and where the experience fell short.

First impressions vs Overall impressions

Initially, users had slightly positive impressions, describing Weever.ai as modern and smart. Most expected it to act as a product recommendation or review tool, building strong initial trust based on appearance and branding.

"It will give me product recommendations as well as unbiased reviews." Tester 1A


Compared with their initial excitement, users’ overall impressions declined notably after interacting with Weever.ai. Pain points like irrelevant results, slow loading, and limited transparency hurt users’ perceptions.


“I searched for basketball gifts, but it gave me a Fisher-Price toy. That’s not what I meant.” Tester 2B

“I don’t get it...What is the reasons of recommending this [product], and not another” Tester 4A

"If it takes more than 2 seconds I would probably just exit" Tester 3A


Findings summary

The following table summarizes the pain points identified in the study, ordered by priority. Each pain point includes a design recommendation to address it; a sketch of how the transparency recommendations could surface in the product follows the table.

Pain point | Priority | Design recommendation
Irrelevant or inconsistent search results | 🔴 High | Display clearer logic behind AI recommendations (e.g., “Why this result?”).
Slow loading time | 🔴 High | Optimize performance; add a progress indicator or loading feedback to manage expectations.
Lack of transparency about how the AI works | 🔴 High | Add transparency cues: brief explanations of data sources, tooltips, or messages showing how recommendations are generated.
Limited number of results and missing pricing info | 🟠 Medium | Display a consistent number of results with complete info; if unavailable, provide contextual reasons (“Price unavailable for this source”).
No filtering or customization options | 🟠 Medium | Introduce filtering tools (e.g., by price, brand, or source) and a “Load more results” option to increase autonomy.
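
To make the high-priority transparency recommendations more concrete, here is a hypothetical sketch of a recommendation payload carrying a “Why this result?” explanation and a contextual price fallback. All field names and values are illustrative assumptions, not Weever.ai’s actual API.

```python
# Hypothetical recommendation payload showing how transparency cues and
# a contextual price fallback could be attached to each result.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    title: str
    price: str | None            # None -> render a contextual fallback
    source: str
    why_this_result: str         # short, user-facing explanation of the match
    matched_terms: list[str] = field(default_factory=list)

rec = Recommendation(
    title="Spalding NBA Official Game Basketball",   # illustrative product
    price=None,
    source="example-retailer.com",
    why_this_result="Matched your prompt 'basketball gifts' on product "
                    "category and gift popularity.",
    matched_terms=["basketball", "gift"],
)

def render_price(r: Recommendation) -> str:
    # Explain a missing price instead of silently omitting it.
    return r.price or "Price unavailable for this source"

print(rec.why_this_result)
print(render_price(rec))
```

Surfacing the explanation next to each result, rather than hiding the matching logic, directly targets the trust gap participants described.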

Conclusion

The study revealed that trust in AI systems depends not only on accuracy, but also on transparency, feedback, and user control. By addressing these factors, Weever.ai can rebuild user confidence and transform its experience from one of uncertainty to one that feels trustworthy and human-centered, potentially boosting conversions.


My takeaway

This project reinforced my belief that humanizing AI starts with trust. Through behavioral research and usability testing, I learned how small design cues, like transparency and timely feedback, can make a complex system feel more understandable and human.



Copyright 2025 by Nicolas Peyre