Ai Dojo Web Application
UX/UI design, Illustration (Q1/2023)


How do we get customers excited about GenAI, build trust in its data, and help our business by improving brand adoption and retention?
The team
Over 20 people worked on this project to deliver within 40 days, from conception to launch. Most were stakeholders (PMs and leaders) and data science bot builders, who did the heavy lifting on the back end, creating and training LLMs with our data and selecting the final models to add to the app. I was the only designer, leading the end-to-end experience; I worked closely with the team and engaged daily with the front-end developer, which kept our hand-off process fast.
Why GenAI
Data from UJET research (Dec 2022) shows that 54% of US customers would choose to communicate with a human over a chatbot because:
- Chatbots are not able to express empathy or provide a human-like conversational experience.
- Their limited ability to answer customers' queries causes frustration.
GenAI technology has been shown to solve those problems in a fraction of the time it takes to build traditional bots.
Natural-language GenAI technology is expected to be a top area of business spending in the global market, reaching US$42.6 billion by the end of 2023 and growing at a compound annual rate of 32% to US$98.1 billion by 2026.
So, to maintain its competitive edge as a leader in the AI space, LivePerson shifted its company strategy heavily toward GenAI.
The problem
We, at LivePerson, started by asking: how do we get customers excited about GenAI, build trust in its data, and help our business by improving brand adoption and retention? The problem is that:
- LLMs and GenAI are third-party technology, not integrated with our platform, so they don't contain our customers' data, and they raise privacy and security concerns for our customers.
- The models were trained with "not clean" data from the internet, which can produce inaccurate or unusable content for a brand.
- The technology is also expensive.
The goal
The complexity of this project came from meeting the goals of multiple personas on a tight timeline.
We should offer a playground app where our internal team and customers can:
- Create bots based on LLM/GenAI models trained with LP's "clean" data, to solve customer-specific use cases.
- Share conversations, and test and evaluate model performance (thumbs up and down).
- Get quantitative (star rating) and qualitative (free text) feedback on bot response quality and accuracy to improve trust (see the sketch after this list).
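To make the two feedback levels concrete, the sketch below shows the kind of data each one might capture. The type and field names are illustrative assumptions, not the app's actual schema.

```ts
// Illustrative shapes for the two feedback levels described above.
// Field names are assumptions for this sketch, not the production schema.

// Quick, response-level evaluation (thumbs up / thumbs down on a bot reply).
interface ResponseFeedback {
  conversationId: string
  messageId: string
  rating: 'thumbs_up' | 'thumbs_down'
}

// Bot-level feedback: quantitative (star rating) plus qualitative (free text).
interface BotFeedback {
  botId: string
  stars: 1 | 2 | 3 | 4 | 5
  comment?: string
}
```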
For the leadership, the goal was to:
- Identify adoption interest from brands.
- Learn about costs, the bot-building timeframe, and the feasibility of offering GenAI bots to customers (building bots faster).
- Demo the app at the board meeting on Feb 7, 2023 to gather feedback.
1- Discover
The solution should address the needs, goals, and JTBDs of multiple app personas
The team had conversations with brands, and based on additional details from stakeholders, I identified the app's user personas: LP data scientists, LP Customer Success Managers, LP bot developers, brand AI managers, and the wider LP community, including C-level executives. I interviewed users to gather pain points and expectations, and I contributed to the requirements document from a user perspective.
For example, one of our customers' concerns was:
How can I see considerable improvement in bot response quality before I commit to investing in this expensive technology?
During my competitor analysis, I identified that 5 competitors were already using GenAI in their products, and 11 more had announced upcoming features.
After interviewing 5 users (personas), I had the following learnings and conclusions that helped me identify the best solution:
- Automation Managers and Customer Success Managers want an easy, low-to-no-code way to build, manage, test, provide feedback on, and demo bots trained with customer data, all in one place.
- Automation and Model Managers want an easy way to control who can build bots, who can see them, and how conversations are shared with potential customers.
- Bot builders want a powerful, well-documented, and efficient way to build, deploy, manage, test, and troubleshoot bots.
- 100% of the data science bot builders interviewed want a simple way to view and analyze the feedback provided by users.
- C-level and LP board members want to demo this application to customers ASAP, on any device and at any size, to take advantage of the high interest in this technology and potentially open higher sales and revenue opportunities.
The proposed solution
Based on the insights gained about the business and user needs, the dev lead and I recommended building the following:
- A web app based on the Material Design library and Vue components to allow for quick development, since they offer built-in light and dark themes, multi-device support, and accessibility (see the sketch after this list).
- An app that uses the latest customer-facing brand look and feel, the Curiously Human style.
- An app organized around user tasks and jobs-to-be-done, for easy discovery and better task efficiency, which would also facilitate testing and feedback collection:
  - Bot selection + prompt settings + share conversation
  - Bot management
  - Conversation management
  - User management
  - Login/authentication control
- Built-in analytics with Pendo for usage learnings (telemetry; also shown in the sketch below).
- Bot-level and response-level feedback in the UI, to gather better insights for the model and to encourage customer engagement.
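To make the stack choice concrete, here is a minimal bootstrap sketch. It assumes Vuetify 3 as the Material Design library for Vue and the standard Pendo browser agent; the case study only names Material Design, Vue, and Pendo, so the library version, theme colors, file names, and visitor/account IDs below are illustrative rather than the production setup.

```ts
// main.ts (sketch): Vue app with Material Design theming and Pendo telemetry.
import { createApp } from 'vue'
import { createVuetify } from 'vuetify'
import 'vuetify/styles'
import App from './App.vue' // hypothetical root component

// Vuetify ships light and dark themes out of the box; only the brand colors
// need overriding (hex values here are placeholders).
const vuetify = createVuetify({
  theme: {
    defaultTheme: 'light',
    themes: {
      light: { colors: { primary: '#FF6900' } },
      dark: { colors: { primary: '#FF8A3D' } },
    },
  },
})

createApp(App).use(vuetify).mount('#app')

// Pendo is loaded by its install snippet in index.html; initialize it once
// the signed-in user is known so usage events map to a visitor and account.
declare const pendo: { initialize(options: object): void }
pendo.initialize({
  visitor: { id: 'user-123', role: 'csm' }, // placeholder IDs
  account: { id: 'brand-abc' },
})
```

A dark-mode toggle can then switch `theme.global.name` through Vuetify's `useTheme()` composable, so both themes come essentially for free.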


2- Ideate and design
My impact on product name and visual elements while navigating scope and requirements changes
I used the following product design principles during this project:
Innovative and engaging
We are not afraid to take risks! We are innovating with this product, being creative and setting new standards with UI patterns not seen in other LivePerson products.
Simple
Less is more! Fewer steps, fewer pages, less text, less clutter. More simplicity, with an easier and cleaner interface.
Familiar
Material Design icons and components are widely recognizable, reducing cognitive load for users.
Contextual
Help users and reduce complexity by showing only what they want, when they want it. For example, UI elements are shown or hidden based on user permissions.
Trustworthy
Our customers can easily engage and provide feedback to improve the model, allowing them to gain trust in the bot's response quality and accuracy.
Accessible
Material Design components and designs come with built-in accessibility and meet WCAG standards.
I created a user journey and task flow and validated them with the team, which was crucial in helping the dev team quickly deliver the right experience without much back-and-forth or revision.
As a result of our brainstorming sessions, I put together the long-term-vision information architecture for DA and presented it to stakeholders for approval, since we needed buy-in before proceeding. This became the base for our roadmap. After approval, I created the information architecture for the MVP (shown below).
My impact: What should the app be called?
At this point, the team was working under an internal project name that was not ready to be customer facing. I raised the need for a better product name and led a brainstorming session with the SVP of Conversational AI and stakeholders. After discussing different options, the team agreed to use the name Ai Dojo, which reflects the playground and bot-training nature of the app in a fun and creative way. The marketing team approved its use for the MVP launch.
For the UI, I explored different navigation patterns, for example drop-down menus, static cards, dynamic cards, and carousel cards. In the end, I chose carousel cards because they solve two of our main goals for the navigation (see the sketch after this list):
- Make it easy to select and identify the active bot.
- Help the user make a decision by providing a description for each bot.
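Below is a minimal sketch of how that carousel-card selector could look as a Vue component, again assuming Vuetify 3; the component name, sample bots, and `select` event are hypothetical placeholders rather than the shipped code.

```vue
<!-- BotCarousel.vue (sketch): carousel cards for selecting the active bot. -->
<script setup lang="ts">
import { ref } from 'vue'

interface Bot { id: string; name: string; description: string }

// Placeholder bots; in the real app these would come from the bot registry.
const bots = ref<Bot[]>([
  { id: 'faq', name: 'FAQ Bot', description: 'Answers common account questions.' },
  { id: 'summary', name: 'Summary Bot', description: 'Summarizes a conversation.' },
])

const activeIndex = ref(0) // the visible slide identifies the active bot
const emit = defineEmits<{ (e: 'select', bot: Bot): void }>()
</script>

<template>
  <v-carousel v-model="activeIndex" hide-delimiter-background height="220">
    <!-- One card per bot: the description helps the user decide. -->
    <v-carousel-item v-for="bot in bots" :key="bot.id">
      <v-card class="ma-4" :title="bot.name" :text="bot.description">
        <v-card-actions>
          <v-btn color="primary" @click="emit('select', bot)">Use this bot</v-btn>
        </v-card-actions>
      </v-card>
    </v-carousel-item>
  </v-carousel>
</template>
```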
Next, I wireframed my vision, including the main sections and their responsive behavior, to guide the dev lead in choosing the right layout for the app.

DESIGN CHALLENGE #1: What illustrations can I use?
When I was designing the login screen, I contacted the Marketing team for a custom illustration so I could align the product with the brand, but I was informed that the website illustrations were being deprecated and could no longer be used. They gave me no guidance on what I could or should use. My solution was to use the current in-product illustration style, but customize it to achieve my vision for this app.
Result: This approach allowed me to quickly deliver a beautiful, fun illustration that still felt recognizably LivePerson.


Login screen first draft with the Curiously Human illustration (left) and final result with my custom illustration (right). Click image to enlarge.
DESIGN CHALLENGE #2: Scope and requirements grew during the design and development phases
Another challenge was keeping up with the constant increase in scope during the design and development phases, because leaders were accessing the testing environment while we were still developing the app. Due to the quick turnaround needed, design and development of features happened almost in parallel. To make sure I was still overseeing the quality of the experience through those rapid changes, I had to adapt and simplify the hand-off process with the dev lead. Even though he was in Australia, I met with him daily to discuss designs and feasibility and to brainstorm alternatives as needed.
Result: We had near-zero technical rework on those designs while still delivering features on time.
DESIGN CHALLENGE #3: UX copy did not scale with the current card component design.
Since we were planning to build around 5 bots for the MVP, I expected to revise the bot card descriptions myself to keep them short and meaningful. However, in the middle of the project, the team decided that bot developers would write the descriptions for their own bots, and those ended up being very long. This became an issue because the card component I was using from our design system did not accommodate long descriptions. Once I identified the problem, I took the initiative to create some variations and ran a quick validation session with peer designers, which resulted in a more scalable and flexible solution for my app.
Impact: I brought the new pattern to the design system designer so it could be updated and added to the design library accordingly.

One of my explorations
3- Validate
Designed an additional feature driven by user insights that achieved 22% engagement across all app users and 79% among CSMs and leaders.
With very little time allocated for validation, I had to be creative. To make sure all personas and users of the app had a chance to provide feedback before launch, and that we were building the right thing, I demoed the app to a group of users all at once and asked for feedback. I triaged the most impactful items, which let me focus on resolving only the major UI issues at the time. Most feedback was about copy improvements; since I was the one writing the UX copy for the app, those were easy fixes.
Learnings from users' feedback:
- When asking for feedback, make it clear what I am evaluating about the bot and use positive verbiage.
- Help me understand what each bot does (with better descriptions) and what prompts I can use with those bots.
- I'd like to download and share the conversations with anyone very easily.
Results:
- I provided new copy for the feedback screen to the dev lead.
- I shared the feedback with bot builders and gave them tips and best practices for descriptions. With more time, I would have created a doc with instructions and examples to help them.
- I created the "share conversations" flow, quickly validated it, and handed it off to the dev lead.

Impact: this added feature had a great engagement rate, being used at least once by 22% of all app users and 79% of CSMs and leaders.
4- Build and Release
A flexible and quick design-to-dev hand-off process allowed the app to launch in time to be presented at the company board meeting
Because development happened slightly after design, almost in parallel, I had to work closely with the dev lead daily, making design and experience decisions as he needed them. I was confident we had a very good app, since feedback from users and stakeholders was consistently positive, but the final test would be the board meeting. Below are the final designs that launched on Feb 7, 2023. The next day, the app went officially live for all external users and the LivePerson community.


Unauthorized screen. Click to enlarge (all images)
Login screen

Bot selected

Auto summary

Feedback modal

Admin/Users

Library/Conversations

Library/Bots
5- Test and analyze
Post-launch insights simplified the experience, boosted engagement, and enabled new GenAI solutions on the platform.
Post-launch, many users started playing with our LLM bots and providing feedback on them. Our Customer Success Managers started demoing Ai Dojo to our customers. It was a huge success; everyone wanted to create a bot for their customer. At that time, I went to work on another GenAI project: Auto-summary for Agents. When I came back to Ai Dojo a month later, the team had added several new features and capabilities to the app. I triangulated user interviews, Pendo analytics, and app analysis to gather quantitative and qualitative data on how the app was being used and how it performed during its first month live.
Post-launch key insights
CSMs got several requests to build more custom bots with customer data. Based on these learnings, I added a self-serve, code-free "Create bot" experience to the roadmap, which shipped as a fast follow post-launch with the goal of removing the dependency on bot developers and increasing engagement. It offered "basic" and "advanced" flows based on the user's technical expertise.
The default bot-creation flow was super easy, starting from a template, which resulted in rapid adoption of the app. In a month, the application went from 3 to over 200 bots, and it is still growing.

New collapsible side nav.
Click to enlarge.
I spoke with users and learned that CSMs needed to hide and easily find their bots during demos, since the app now had a large number of bots.
As a result, I turned the menu into a collapsible sidebar to maximize the demo view, added search and filtering capabilities, and created a "favorites" category, which helped users efficiently categorize and find bots for customer demos.
Based on feedback, we also discovered that the original name was not culturally appropriate. As a result, the app name was changed to Ai Studio.

Self-serve bot creation from templates
Results: Strong Net Promoter Score, great customer adoption intent, high engagement, and quick scalability, with a big impact on the business.
- +90% net promoter score (NPS)
- 74% of customers said they would adopt features using LLM/GenAI technology
- 100% of customers who were part of demos directly engaged with Ai Dojo
- In a month, from 3 to 234 bots (120x above expectation)
- In a month, from 5 to 385 users (100x above expectation)
- 43 demos completed (as of Mar 21, 2023)
Results: Influenced platform-level GenAI strategy and led the design of new features
GenAI platform-level strategy
At that time, I was also involved in the design strategy of other GenAI/LLM initiatives on the platform, such as Ai Summary for Agents, Auto Intents generation, and Auto complete, giving me a holistic view of how this new technology, added as a paid feature, would increase revenue and reaffirm LP as a leader in the Conversational AI space.
Impact: As a result of this holistic view, I suggested adding a platform-level legal-agreement experience to the roadmap, for a streamlined experience for our customers and better control for our legal/product teams as we keep adding AI features across the ecosystem.
Ai Dojo widget for agents
The high adoption interest and great customer feedback led me to design the end-to-end experience of the app's next version, which was added directly into the platform as a new widget for agents within the Agent Workspace. Brands that wanted to test and fine-tune their GenAI bots using real consumer conversations could join the pilot program, enable the widget for their customer care agents, and let the bots suggest responses in real time.
Ai Dojo Widget - Click to enlarge

Auto-summary for agents
I also led the end-to-end design of a new Auto-summary for agents. It generates summaries of historical conversations within the Agent Workspace, improving customer care agents' resolution time and CSAT scores.
Ai Summary for Agents - Click to enlarge

TESTIMONIAL
“I love Ai Dojo. It is so easy to create and manage GenAI bots for my clients. My favorite feature is the "share conversation" button, but the new search and filtering capabilities just made it even better. My customers really enjoy engaging with their bots and can't wait to start using these on their consumer conversations."
A.R., Customer Success Manager for a large Telecom in the United States