Building the infrastructure for AI quick actions

Objective

We had recently built a new feature that lets our customers' students search through their course material with the help of an AI chatbot. We wanted to see how far we could expand the capabilities of this bot.

We knew our customers were spending hours creating course support materials like study guides, reflection questions, and even social media content. A potential use case emerged — what if our LLM could help our customers with content creation, too?

Solution

A personalized AI trained on each customer's content, capable of creating course support materials in just a few clicks.

I ran in-depth customer interviews, researched other ML content-creation tools, and led collaborative internal testing, which allowed us to release a set of AI quick actions for our educators.

Team

Developer
CTO/Project Lead
Designer (me)

My Responsibilities

Research
Prototyping
Final designs
QA with engineering

Running customer interviews

During customer interviews, we heard that educators were spending a lot of time creating materials like:

Finding an opportunity area

The world is full of AI tools attempting to help you create content. The flow goes something like this:

1. User wants to create content and writes out a quick prompt.
2. AI generates a mediocre response.
3. User has to add more detail, and might receive something useful, but only after a lot of extra effort.

However, we saw a new opportunity.

Our target audience had an original piece of content (their course) that they wanted to create more resources from.

Instead of users needing to write long prompts, what if this original piece of content (an initial asset) could serve as a prompt to create additional resources (derivative assets)?

1. User has something they want to create more content from: an initial asset.
2. AI analyses the initial asset and automatically creates new, related pieces of content: derivative assets.
3. User immediately receives something of value, without extra effort.

Industry research

With this new opportunity in mind, I set out to do some industry research. The goal was to understand whether an initial-asset-to-derivative-asset flow already existed.

A key takeaway from the research was that default actions were crucial. LLMs can produce incredible answers, but only when the original prompt is great. Customers often struggle with how to ask for the type of content they want; default actions were a way to solve for this.

Aligning on Architecture

Based on the industry research and our own user interviews, we had uncovered the building blocks.

Initial Asset

An original piece of content from our customer. In the case of educators, this would be a lesson.

Derivative Asset

Additional pieces of content our customer wants to create based on the Initial Asset. This is what we’d need to templatize.

Parameters

A set of customizable settings for each derivative asset, so our customers could receive personalized output.
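One way to model these three building blocks is as a small data schema. This is a hypothetical sketch; the type and field names are my assumptions, not the actual back-end schema:

```typescript
// An original piece of customer content, e.g. a lesson.
interface InitialAsset {
  id: string;
  title: string;
  content: string;
}

// A customizable setting, e.g. intended audience or tone.
interface Parameter {
  key: string;
  label: string;
  defaultValue: string;
}

// A templatized content type the customer can generate.
interface DerivativeAssetType {
  id: string;
  name: string;            // e.g. "Study guide"
  promptTemplate: string;  // how the initial asset becomes a prompt
  parameters: Parameter[]; // settings for personalized output
}

// Adding a new content type is a data change, not a code change,
// which is what keeps the architecture flexible to expand later.
const reflectionQuestions: DerivativeAssetType = {
  id: "reflection-questions",
  name: "Reflection questions",
  promptTemplate: "Write reflection questions for: {content}",
  parameters: [
    { key: "audience", label: "Intended audience", defaultValue: "students" },
  ],
};
```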

This is also a feature we'll continue expanding to cover more content types and parameters, so we needed to build it in a flexible way that would allow us to expand and iterate in the future.

Aligning on this flexible architecture allowed our engineer to get started with building the back-end, while I got to work designing the user experience.

Design Exploration

The user experience we had in mind was simple:

1. As a user, I upload content to Zolt (initial asset).
2. I have a list of quick-actions that I can choose from to create more content (derivative assets).
3. I can customize certain details (e.g. my goals, the intended audience, etc.).
4. I receive a totally new piece of content, based on my initial asset.

Once we had the flow down, I moved to mockups. The big UX questions were:

1. How should users be able to access the quick-option assets?

Option 1

Reuse existing default questions: In our AI chat, we displayed a set of default questions for users to choose from. We could reuse this feature for the quick-action items.

Option 2

Add quick-actions to text field UI: We could display a pop-over in the text field, allowing users to select from the different quick-action items. This is a familiar UX across different chats, when selecting attachments.

Option 3

Dialog creation flow: Add quick-actions into the global +Create button, and have users go through a dialog creation flow.

2. How should users be able to customize parameters?

Option 1

Two-step pop-over: Users could customize parameters in the same pop-over they selected the initial quick-option asset from.

Option 2

In-line in the chat: Users could customize parameters in the text field, once they had picked a quick-option asset.

Option 3

In a separate “asset library”: Users could customize asset parameters in the “back-end”, in a separate settings dialog.

Final Design

We ultimately chose to add the quick-actions into the chat text field, for three key reasons:

As for the parameter settings, we chose to hide them from our customers in the initial release. We wanted to do more testing on which parameters work best, and we wanted to see how our customers would interact with a more simplified version of the quick-actions first.

Users can find quick-action assets in the chat, but cannot yet customize parameters.

Testing

Building LLM tools provides a unique set of challenges, especially when it comes to QA and testing.

As a team, we iterated on different parameters and derivative-asset quick-actions, testing them against a variety of initial assets we uploaded into our platform. But as is the case with LLMs, we could never manually test every scenario a user might create. So, eventually, we decided it was better to release the feature into the real world and let our customers kick the tires.
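Our internal testing amounted to walking a matrix of initial assets, quick-actions, and parameter values. A hypothetical sketch of that loop (names are illustrative) shows why exhaustive manual QA of LLM features is impractical:

```typescript
interface TestCase {
  asset: string;
  action: string;
  audience: string;
}

// Enumerate every (asset, action, parameter value) combination to spot-check.
function buildTestMatrix(
  assets: string[],
  actions: string[],
  audiences: string[],
): TestCase[] {
  const cases: TestCase[] = [];
  for (const asset of assets)
    for (const action of actions)
      for (const audience of audiences)
        cases.push({ asset, action, audience });
  return cases;
}

// Even a small matrix grows multiplicatively, and every run still needs
// a human judgment of output quality.
const cases = buildTestMatrix(
  ["lesson-1", "lesson-2"],
  ["study-guide", "reflection-questions"],
  ["beginners", "experts"],
);
console.log(cases.length); // 8
```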

Results and Next Steps

The final result was a powerful quick-actions architecture for us to start building on. In an ideal world, we would have done more testing with users before releasing.

However, for our small team, releasing this and being able to get feedback from development partners who were actively in the app was the best solution.

Our goal now is to: