ByteDance

Redesigning the AI prompting experience for 40K+ UX researchers.

Timeline & Status

6 weeks
Summer 2024

Shipped

Team

1 designer (me)
1 product manager
1 front-end engineer
1 ML engineer

Tools

Figma
ProtoPie

Skills

Interaction design
Visual design
Usability testing

My Impact

As the sole designer, I drove the research, redesign, and implementation of the homepage prompting experience for ByteDance's internal AI tool, shipping 5 MVP features to 40K+ users.

37% +

task success rate

26% +

monthly user retention

40K +

internal users daily

12%

reduction in churn

Old homepage in 2023

Shipped redesign in 2024

Highlights

Core redesign features: agentic modes, prompt library, templates, madlib edit, and new visual icons.

Agentic Modes

Madlib Edit

Prompt Library

Templates

New Icons

Context

A high-demand AI tool that helps UX researchers generate 3 core research outputs:

Surveys and Forms

Summaries

Interview Scripts

The Challenge

Data from the first 90 days post-launch showed a high drop-off rate (~50%) at the homepage.

An up-to-50% user drop-off rate led to low task completion and low user retention. The product team asked me to spearhead the solution.

User Pain Points

350 survey responses, 8 user interviews, and 80+ pieces of user feedback revealed deeper frictions in the existing homepage.

Difficult to follow the overcomplicated 5-step prompt completion process.

Low discoverability for core features such as templates.

Inconsistent input and file attachment formats led to more error cases.

Many users struggle to write accurate prompts on their first try.

Design Goal

Simplify the prompting process based on users' mental models and the research artifact creation process.

Through research, I found that UX researchers work best when tools follow their natural work process, which breaks down into 3 steps:

Modes: choose a type of research output format.

Template: narrow down to a product domain.

Prompt: write specific needs with supporting documents.

Feature One

Three agentic modes based on the researcher's desired output, forming the first step of their workflow.

Collaborating with ML engineers, I proposed the three agentic modes based on the output formats most used by our users: survey, summary & interview.

In close collaboration with ML engineers, I restructured the information architecture around the capabilities of the AI agents.

Reduced the 5-step prompt completion process to 3 steps.

Mental model more aligned with users' natural workflow.

Design Decisions

Iterations of different homepage structures and user flows.

Feature Two

Highlight templates to guide users in narrowing down the research domain in the second step.

For new users, templates lower the barrier to better results on the first try and introduce structure for more effective prompting.

Feature Three

A prompt library with examples to help users kick-start complex inquiries.

For users who struggle to write more advanced prompts, the prompt library offers example and customizable prompts that ease them into complex product areas.

Feature Four

Madlibs offer contextual guidance while giving users flexible agency to edit.

Madlibs give users the agency to compose their own prompts while referencing examples with contextual information.

Usability Testing

Usability testing with 9 internal users gave us insight into how users want suggestions delivered.

Feature Five

A refreshed design system to reflect an AI-first product.

Established reusable components and elevated the visual identity to reflect an AI-native design system, including a template card, a prompt input bar, and 15+ new UI icons.

Retrospective

Designing for human-AI interaction at a big corporation has taught me…

Design "with" users, instead of "for" users.

When designing novel features, it's critical to involve users throughout the process, not just at the end for validation. Features like "madlib edit" are examples where user feedback drove the major direction of many design decisions.

Learn how AI works on a technical level.

Close collaboration with ML engineers was essential to proposing the three agentic modes. Understanding technical capabilities helped me to design interactions that accurately represent what AI can do.

Designing for AI requires constant change.

AI capabilities evolve rapidly, and so does user literacy with AI. If I were to redesign this in 2026, I would prioritize more natural conversational input, with "madlib" as a suggested option.

Driving coordination and alignment

Driving a project end-to-end and getting stakeholder buy-in has challenged me in the best way possible. I've learned what it takes to truly collaborate cross-functionally from beginning to end.

There's so much more behind the scenes!

This is just a glimpse of the entire case study.

More

Let's be friends!

© Jen Zhang 2026 Seattle, WA

Rain drop, drop-top;

From sketch to mobile to desktop~