
ai led reviews


How should AI responsibly enter high-stakes performance review writing?

ROLE:

PRODUCT DESIGNER

TIMELINE:

JAN 2026 - PRESENT

SKILLS:

DISCOVERY, AI PROTOTYPING, RESEARCH, DESIGN

ABOUT

As AI rapidly evolves, the question isn’t whether it should assist with writing — it’s how much, where, and under what guardrails.


This space also represents a critical business opportunity: performance reviews are central to Lattice’s value and a key driver of retention. By introducing data-rich AI writing grounded in real performance signals, we can differentiate Lattice, improve the quality and fairness of reviews, and deepen engagement across the broader Lattice ecosystem.

CHALLENGE

Writing reviews today is a high-effort “blank page” exercise. Users must reconstruct months of work scattered across 1:1s, goals, feedback, updates, and external tools.


This made it a natural opportunity for AI to aggregate data and reduce the blank-page problem. But because user comfort and ethical expectations vary, we were also navigating core questions:

✷ Where is AI helpful versus intrusive?

✷ How much authorship should remain visibly human?

✷ How much should we encourage conversational interaction versus allow users to navigate context independently?

✷ When does transparency build trust and when does it create hesitation?

Showcase image

PROCESS

Before designing anything, we talked boundaries


With an early AI prototype, I led user research to understand where existing gaps lie and how users feel about AI in performance reviews. We learned that evidence recall and synthesis was a major opportunity, that trust depends on clear, transparent sources, and that humans should retain final judgment.


I also led a series of FigJam working sessions with Engineering, Product, and Talent stakeholders to define what a good review looks like and where AI should show up — and where it shouldn't. These boundaries became constraints for every concept we explored.

Diverged to understand intrusiveness


With guardrails in place, we explored three levels of AI intrusiveness to understand where collaboration feels helpful versus invasive. Each direction represents a different balance.

Pre-writing (high guidance): synthesize data gaps upfront, draft in bulk

Step-by-step focused flow (moderate guidance): draft question-by-question with a split screen

Lightweight in-field assistance (low intrusion): embed highlights and functionality within text fields


We weren’t choosing UI patterns; we were rapidly prototyping to align on direction and test appetite.

AI Interaction Model


A core question in this work was where AI should show up in the workflow. We weighed two competing models:


Agentic-First (Conversational Default): Users land in a general review state and are encouraged to ask our agent for help drafting and recalling evidence through prompting. Risks: Forces behavioral change, increases dependence on prompting

Embedded AI (AI as Augmentation): Users continue to experience reviews with AI layered in. Risks: AI may feel secondary, lower agent adoption


My POV: AI should show up as an embedded layer rather than splitting focus into a separate agent experience. We should meet users where they already are instead of asking them to change their behavior and rely on prompting.


This core decision defined how AI integrates across our products.

Explored speed to draft


Across directions, we explored how quickly users could reach a meaningful first draft. Early on, we required more upfront input to establish intent before generating anything. Through internal dogfooding and further research, we shifted toward proactively generating drafts as the default starting point—helping users overcome the blank page while still allowing room for iteration and control.

Designed for trust and transparency


At every step, we’ve considered what happens when Lattice or productivity data is limited. How do we prevent AI from hallucinating or generating vague, overly polished reviews?


To address this, we introduced alternative workflows that prompt users to add more context through the agent when data is insufficient. We also ensure that every insight and example is backed by clear sources and citations, so users can understand exactly where the information comes from.


Redesigned a more intentional Context Panel and Search


Reimagined the Context Panel from a static, product-organized data view into a more structured, intent-driven experience tailored to how different users write reviews. I identified distinct needs across manager, self, peer, and upward review flows, and proposed prioritizing the most relevant signals to reduce noise and improve clarity.


Introduced a unified search experience that allows users to quickly find specific projects, themes, or examples across reviews, 1:1s, and updates—addressing a key pain point of manually digging through fragmented data.


This work lays the foundation for a scalable pattern across Lattice, shifting panels from product-based organization to experiences that adapt to user intent and available data.

TRADEOFFS

Agentic vs Embedded

We prioritized embedded AI over a purely agent-driven experience.

Why: Meeting users in their existing workflow reduces behavior change and lowers the barrier to adoption, especially for high-frequency tasks.

Friction vs Flow

High-risk contexts require guardrails, even if that means slower drafting when data is not available.

Why: Reducing friction everywhere optimizes for speed, but writing reviews without data would break trust.

Maintaining context panel vs Fully agentic experience

We prioritized maintaining a structured Context Panel alongside the agent.

Why: Reviews are high-stakes and require a clear, inspectable source of truth. Relying solely on an agent can obscure how conclusions are formed, while a persistent context layer supports trust, exploration, and independent verification.

Intrusion vs Adoption

Too much upfront guidance may feel heavy. Too little may undersell value.

Why: We still need to understand the appetite for AI intrusion before committing to a pattern that could decrease adoption.

IMPACT

This is early-stage, strategic work focused on setting the right direction before committing engineering resources.


This initiative will define:

  • How AI enters evaluative workflows

  • How AI patterns integrate with broader canvas surfaces

  • Ethical and defensibility guidelines for future AI features


These patterns are intended to extend beyond Reviews into: Feedback, Goals, Updates, Promotions, and more.


What's done:

✓ Discovery research

✓ Design-led prototypes complete

✓ Internal cross-functional feedback gathered

✓ Usability research: high intent to adopt; all users rated the experience 4.5/5 for ease of use

✓ Internal dogfooding


What's next: EAP and further refinement

Showcase image