
Crash Course in AI Product Design from Google Search + Maps Designer, Elizabeth Laraki



Today’s Episode

Everyone’s building AI products wrong.

They’re sprinkling AI on top like fairy dust. Adding chat interfaces to everything. Ignoring 70 years of design principles.

Elizabeth Laraki was one of 4 designers on Google Search in 2006. One of 2 designers on Google Maps in 2007. She helped create products used by billions—products whose designs barely changed for 15+ years because they nailed it from the start.

Today she breaks down exactly how to design AI features that users actually love.

----

Check out the conversation on Apple, Spotify and YouTube.

Brought to you by:

* Vanta: Automate compliance, manage risk, and prove trust

* Kameleoon: Leading AI experimentation platform

* The AI PM Certificate: Get $550 off with ‘AAKASH550C7’

* The AI Evals Course for PMs: Get $1155 off with code ‘ag-evals’

----

Timestamps:

00:00:00 - Intro

00:01:52 - Elizabeth's background at Google

00:04:19 - Google's AI search integration

00:06:19 - Designing image & video for AI

00:09:44 - AI image expander disaster

00:16:05 - Ads

00:17:50 - AI safeguards & human-in-the-loop

00:18:28 - 3-step AI design process

00:31:29 - Ads

00:33:25 - Designing AI voice interfaces

00:38:25 - Designing beyond chat

00:41:52 - AI design tools for designers

00:44:49 - Live design: LinkedIn for AI

00:57:04 - Google Maps redesign story

01:04:14 - Google Maps India landmarks

01:10:09 - Where to find Elizabeth

01:12:00 - Outro

----

Key Takeaways

1. The Core Design Process Hasn't Changed: Define the product (who, what tasks, what needs), Design it (features, architecture, flows), Build it (UIs, brand). Don't skip to "let's add a chatbot" because you have API access. The fundamentals still apply for AI.

2. AI Adds Non-Deterministic Risk: Traditional software is deterministic - click A, get B every time. AI is non-deterministic with unpredictable outputs. Elizabeth's image expander added a bra strap that wasn't in the original photo. Completely unintentional, completely unacceptable.

3. Work With Research on Safeguards: Audit training data for bias. Build evals that flag sensitive content (human bodies, faces, private information). Show A/B options for ambiguous cases. Make AI's work visible in the UI so users can scrutinize changes.

4. Start With Jobs To Be Done: Don't ask "We have GPT-4, what should we build?" Ask "What painful workflow takes users hours?" Descript mapped the video editing lifecycle and baked AI into each job: remove filler words, edit from the transcript, create clips, write titles.

5. Map User Context, Not Just Needs: ChatGPT voice in the car with three kids? Perfect - nobody's looking at a screen. Meta Ray-Bans reading a Spanish menu item by item? Terrible - they should ask "What are you in the mood for?" Same AI, different context requires different design.

6. Emerge From Ambiguity First: For "LinkedIn for AI," Elizabeth mapped 4 possible directions, picked matchmaking, identified AI's unlock (personality patterns vs. keyword matching), and mapped separate UIs for job seekers and employers.


Published 1 month, 1 week ago





