We had recently built a new feature that lets our customers' students search their course material with the help of an AI chatbot. We wanted to see how far we could expand the bot's capabilities.
We knew our customers were spending hours creating course support materials like study guides, reflection questions, and even social media content. A potential use case emerged: what if our LLM could help our customers with content creation, too?
A personalized AI trained on our customers' content, capable of creating course support materials in just a few clicks.
I conducted thorough customer interviews, researched other AI content-creation tools, and ran collaborative internal testing, allowing us to release a set of AI quick-actions for our educators.
Developer
CTO/Project Lead
Designer (me)
Research
Prototyping
Final designs
QA with engineering
During customer interviews, we heard that educators were spending a lot of time creating things like:
study guides
reflection questions
blog posts and other social media content
The world is full of AI tools attempting to help you create content. The flow goes something like this:
However, we saw a new opportunity.
Our target audience had an original piece of content (their course) that they wanted to create more resources from.
Instead of users needing to write long prompts, what if this original piece of content (an initial asset) could serve as a prompt to create additional resources (derivative assets)?
With this new opportunity in mind, I set out to do some industry research. The goal was to understand whether an initial asset → derivative asset flow already existed.
A key takeaway from the research was that default actions were crucial. LLMs can produce incredible answers, but only when the original prompt is great. Customers often struggle with how to ask for the type of content they want; default actions were a way to solve for this.
Based on the industry research and our own user interviews, we had uncovered the building blocks.
An original piece of content from our customer. In the case of educators, this would be a lesson.
Additional pieces of content our customer wants to create based on the Initial Asset. This is what we’d need to templatize.
A set of customizable settings for each derivative asset, so our customers could receive personalized output.
However, this is a feature we'll continue expanding to cover more content types and parameters, so we needed to build it in a flexible way that would allow us to iterate in the future.
Aligning on this flexible architecture allowed our engineer to get started with building the back-end, while I got to work designing the user experience.
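To make the architecture concrete, here is a minimal sketch of how such a quick-action system could be modeled; all names and fields are hypothetical illustrations, not our actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical model: each quick-action is a prompt template plus the
# set of parameters a customer could eventually tune for personalized output.
@dataclass
class QuickAction:
    name: str             # e.g. "Study guide"
    prompt_template: str  # the templatized instruction sent to the LLM
    parameters: dict = field(default_factory=dict)  # e.g. {"audience": "undergraduates"}

# New derivative asset types can be added as data, without structural
# changes -- the kind of flexibility the team was aiming for.
QUICK_ACTIONS = [
    QuickAction("Study guide", "Create a study guide from the lesson below."),
    QuickAction("Reflection questions", "Write reflection questions for the lesson below."),
    QuickAction("Blog post", "Draft a blog post summarizing the lesson below."),
]
```

Because each quick-action is just a template plus parameters, adding a new derivative asset type means appending one entry rather than reworking the flow.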
The user experience we had in mind was simple:
As a user, I upload content to Zolt (initial asset)
I have a list of quick-actions that I can choose from to create more content (derivative assets)
I can customize certain details (e.g. my goals, the intended audience etc.)
I receive a totally new piece of content, based on my initial asset
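The steps above can be sketched as a simple prompt-assembly function, where the uploaded lesson serves as the prompt body and the user's customizations are prepended. This is a hypothetical illustration of the idea, not our production code:

```python
def build_prompt(action: str, initial_asset: str, settings: dict) -> str:
    """Combine a quick-action instruction, the user's customizable
    settings, and the initial asset into a single LLM prompt."""
    lines = [f"Task: {action}"]
    for key, value in settings.items():  # e.g. goals, intended audience
        lines.append(f"{key}: {value}")
    lines.append("Source content:")
    lines.append(initial_asset)
    return "\n".join(lines)

# Example: a lesson plus two customized settings becomes one prompt.
prompt = build_prompt(
    "Create a study guide",
    "Lesson 1: Photosynthesis...",
    {"Audience": "high-school students", "Goal": "exam prep"},
)
```

The key point is that the user never writes a long prompt; the initial asset and a couple of settings do that work for them.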
Once we had the flow down, I moved to mockups. The big UX questions were:
Reuse existing default questions: In our AI chat, we displayed a set of default questions for users to choose from. We could reuse this feature for the quick-action items.
Add quick-actions to text field UI: We could display a pop-over in the text field, allowing users to select from the different quick-action items. This is a familiar UX across different chats, when selecting attachments.
Dialog creation flow: Add quick-actions into the global +Create button, and have users go through a dialog creation flow.
Two-step pop-over: Users could customize parameters in the same pop-over they selected the quick-action from.
In-line in the chat: Users could customize parameters in the text field, once they had picked a quick-action.
In a separate “asset library”: Users could customize asset parameters in the “back-end”, in a separate settings dialog.
We ultimately chose to add the quick-actions into the chat text field, for three key reasons:
It allowed us to easily add more quick-actions in the future
It was easily accessible to users at a moment when they were already interacting with our AI
It matched the UX across other messaging platforms
As for the parameter settings, we chose to hide them from our customers in the initial release. We wanted to do more testing on which parameters work best, and we wanted to see how our customers would interact with a more simplified version of the quick-actions first.
Users can find quick-action assets in the chat, but cannot yet customize parameters.
Building LLM tools provides a unique set of challenges, especially when it comes to QA and testing.
As a team, we iterated on and tested different parameters and derivative asset quick-actions against a variety of initial assets we uploaded into our platform. However, as is the nature of LLMs, we could never manually test every scenario a user might create. So, eventually, we decided it was better to release it into the real world and let our customers kick the tires.
The final result was a powerful quick-actions architecture for us to start building on. In an ideal world, we could have done more testing with users before releasing.
However, for our small team, releasing this and being able to get feedback from development partners who were actively in the app was the best solution.
Our goal now is to:
Validate whether the derivative assets our AI creates are good enough
Understand what types of assets our customers need
Continue improving UX (allow users to customize parameters etc.)