Designs by Farzam
Vibe Coding · Product Design

The design didn't change.
The distance did.
Here's what closing that gap looks like.

A real project. A real team. Figma Make, Marvin, and Claude through MCP — and what actually happened when I used all three.

Vibe Coding — VR headset surrounded by AI and design tool logos

I remember the exact moment I stopped being embarrassed to admit it. A stakeholder asked how I'd walked into a regional alignment meeting with a working, interactive product — not a deck, not a wireframe, not a Figma prototype with noodles connecting static frames — but something you could actually click through and use.

I said I vibe coded it the night before.

They didn't know what that meant. By the end of the meeting, they'd approved the direction.

Let me tell you how I actually work now. And to make it concrete, I'll walk through a real project: a sales CRM internal tool I designed for the sales reps and telesales agents at Delivery Hero. It touched every phase of the design process. And it changed how I think about what product design actually is.


What it is

For developers, it was productivity. For designers, it was a permission slip.

Andrej Karpathy — a founding member of OpenAI — posted a throwaway thought in February 2025 that accidentally named something millions of people were already doing: "There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." The tweet got 4.5 million views. Collins English Dictionary named it Word of the Year 2025.

Karpathy was describing developers. But for designers, it arrived like a permission slip.

"If an LLM wrote every line but you reviewed, tested, and understood it all, that's not vibe coding — that's using AI as a typing assistant."

— Simon Willison, programmer and researcher

Here is what it means for me specifically: I describe what I want in plain language. Claude generates it. I evaluate the output with my designer's eye — does it feel right, does the hierarchy work, does it solve the user's actual problem — and I prompt again. I am not reading every line of code. I am working at the level of intent and outcome. The code is a detail. The design thinking is the work.


The Project

A sales CRM that needed to work for real people in real pressure

Delivery Hero's sales reps and telesales agents are not sitting at a desk with time to learn a complicated tool. They are on calls, on the road, hitting targets, moving fast. The CRM tool they had was not built for how they actually worked. My job was to design something that was.

This was not a greenfield product. It had internal stakeholders across multiple regions, each with opinions about what mattered. It had real users with real frustration. It had a design system already in a GitHub repository. And it needed to move fast — this was not a six-month design cycle with a phase-gate handoff at the end.

So I changed how I worked. And this project became the clearest proof I have that vibe coding is not a shortcut. It is a different — and genuinely better — way to run a product design process.

SalesHero CRM Dashboard — the final product showing recommendations, outreach tracking, and pipeline metrics
The finished product — a CRM built for the speed of a sales call, not a project plan.

Phase 01 · Discovery

Marvin sits in my user interviews so I can focus on listening

Research is where product design begins, and it's also where traditional workflows waste the most time doing the least interesting work. Transcription. Tagging. Affinity mapping. Five tools doing what should be one job.

For the sales CRM, I ran a round of user interviews with the actual reps and telesales agents — the humans who would use this every day. I added Marvin to every meeting invite. Here is exactly how that works:

  1. Marvin generates a project-specific email address. You add that email as a guest to your calendar invite — that's it.
  2. When the meeting starts, Marvin joins as a notetaker. It admits itself into the session, sits there, records, and transcribes in real time.
  3. When the call ends, I open Marvin's dashboard. A structured summary is already waiting: themes, notable quotes, patterns across sessions.
  4. I query across all sessions at once: "What are reps most frustrated about mid-call?" Every answer comes back with the exact quote, participant, and timestamp.

Here's what those sessions looked like:

Usability interview — Muhammad Hafizuawan Ishak presenting the SalesHero dashboard during a live session
Muhammad · Sales Rep · Live workflow walkthrough. Marvin in the call — so I could actually watch and listen.
Usability interview — David Cabal showing the Google Sheets tracker the team used before the CRM
David · Sales Manager · The Google Sheets tracker his team lived in. The pain point, documented in the moment.
Usability testing session — navigating the CRM prototype with Marvin recording in the background
Research session · Prototype in hand, Marvin running in the background — every hesitation recorded.
Marvin — Sales Ecosystem project dashboard with interview recordings and AI project updates
Every session auto-filed. Themes and insights surfaced before I'd even closed my laptop.
Marvin — AI project updates panel showing synthesised findings across all interviews
Marvin's cross-session synthesis — every finding rated, every quote cited and sourced.
Marvin — Analyse board showing affinity-mapped note cards from all user interviews
Every quote, queryable. Filter by theme, source, or question — no sticky notes, no exports.

The synthesis that used to take me three or four days now takes an afternoon of validation. I'm not creating the patterns from scratch. I'm interrogating them. I'm not trusting a summary — I'm reading the source.

"UXR has to match the speed of product. Without Marvin, UXR becomes a hurdle to be jumped. With Marvin we can produce insights at the pace of product."

— UXR Lead, G2 Reviews on HeyMarvin

What I found: the reps weren't frustrated by a lack of features. They were frustrated by the time it took to find what they needed during a live call. Their anxiety was about speed, not capability. That became the design principle everything else was built around.


Phase 02 · Ideation

Figma Make is where I think before I commit

I go to Figma Make the same way I used to go to a sketchpad. Not to finalize. To think out loud.

The standard Figma prototyping workflow — placing components, wiring interactions, connecting frames with prototype noodles — takes time I don't have when I'm still in the "what if" phase. Figma Make removes that friction entirely. It launched in May 2025, built directly into Figma and powered by Claude. I type what I want. A working, interactive prototype appears. I use the Point and Edit tool to click into any element and adjust it directly — padding, color, copy, spacing. Then I prompt again.

Before a single prompt was written, I set up Figma Make with the full Cape design system — so every idea that came out looked and felt like a real Delivery Hero product, not a generic wireframe.

Figma Make — Bring in design system context
01 · Setting up the kit with Cape design system context
Figma Make — Installing Cape npm packages
02 · npm install — Cape Core, Icons, and Tokens wired in
Figma Make — package.json with Cape dependencies
03 · Full Cape dependency tree — live in the kit
Figma Make — Guidelines generated, ready to ideate
04 · Guidelines auto-generated. Kit live. Ideation starts now.
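In practice, step 02 amounts to a few dependency entries in the kit's package.json. Cape is an internal Delivery Hero design system, so the package names and versions below are illustrative placeholders, not the real registry entries:

```json
{
  "dependencies": {
    "@deliveryhero/cape-core": "^4.0.0",
    "@deliveryhero/cape-icons": "^4.0.0",
    "@deliveryhero/cape-tokens": "^4.0.0"
  }
}
```

Once the dependencies resolve, every component Figma Make generates can import from the same packages the production codebase uses — which is what makes the output look like a Delivery Hero product rather than a generic wireframe.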

For the sales CRM, I ran the whole initial ideation pass in Figma Make. Different card layouts for the prospect list. Different ways of surfacing call history. Different approaches to the action flow mid-call. I wasn't precious about any of it. These were ideas rendered fast, disposable by design, alive long enough to be judged.

"When generating a prototype in Make, it looks like something a designer would create, whereas other tools look like something an AI generated — and that gap matters."

— Figma Make reviewer, LogRocket

When one layout caught — when the priority stack at the top felt right, when the call context panel sat at the right visual weight — I committed it and moved on.

The IDEO research data reflects what I experience: people who use AI during ideation generate 56% more ideas and 13% more variety. That's not because the AI is more creative. It's because blank-page drag consumes creative energy before a single real idea has been explored. Figma Make removes the drag. The creativity is still mine.


Phase 03 · Stakeholder Alignment

I showed stakeholders a prototype, not a deck

I took the Figma Make prototype into my stakeholder reviews — internal teams, and regional stakeholders across Delivery Hero's markets. Not a deck. Not a set of annotated Figma screens. A working interactive prototype they could click through, feel, and react to.

The conversations were different. When you show a deck, people debate slides. When you show a working prototype, people debate the product. The feedback is specific. The decisions are real. Regional stakeholders could see immediately how the tool would feel in their teams' hands.

Here's what that looked like on this project:

  • 1 night: from insight to stakeholder-ready interactive prototype
  • Direction approved in the first regional stakeholder review
  • 0 decks: I showed a prototype, not a presentation

Figma's own PM team has moved in this direction entirely — replacing PRDs with Figma Make prototypes. Show, don't tell. I had been arguing for this for years. Now there is tooling that makes it effortless. The direction got approved. Not because the prototype was polished. Because it was real enough to decide from.


Phase 04 · Usability Testing

Marvin is still in the room. Still listening.

I took the same Figma Make prototype back into usability testing sessions with the actual reps. Same process: Marvin's project email on the invite, Marvin joining as a notetaker, Marvin recording and transcribing while I focused on watching the user.

Something surprising happens when you test AI-generated high-fidelity prototypes instead of traditional Figma click-throughs: the feedback quality changes.

When I put something that looks and feels like a real product in front of a user, they stop giving me design feedback and start giving me product feedback. They stop saying "I think this button should be bigger." They start saying "I don't know what I'm supposed to do after this step." That is the feedback that actually matters.

Marvin captured all of it. By the end of the round, I had a usability testing report synthesized from the sessions — themes, failure points, moments of confusion, what worked. The report didn't write itself, but the raw material was organized and queryable before I started writing. I spent the session time learning. I spent the synthesis time deciding.


Phase 05 · High-Fidelity Design · In Progress

Claude builds it. I decide if it's right.

The project kicked off in late February 2026. The deadline is June. We're about nine weeks in — discovery done, prototype approved, direction locked. What's happening now is the part that takes longest and shows least: turning approved designs into production-ready code, screen by screen.

That's where Claude and Figma work as one system.

The design system lives in a GitHub monorepo — Cape Core components, Cape tokens, and the Partner Expansion layer with brand-specific CSS variables (--cpx-*) for Delivery Hero's multi-brand theming. I've wired Claude directly into both Figma and the codebase using MCP — Model Context Protocol, the open standard Anthropic released in November 2024. MCP gives Claude a live, persistent connection to external tools — not a one-shot API call, but a continuous context that holds across the entire session. In practice: Claude reads the Figma file and the codebase at the same time, in the same prompt.
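For a sense of what "wired into both Figma and the codebase" can look like: MCP servers are declared in a small JSON config that Claude reads at startup. The sketch below uses the community figma-developer-mcp package and Anthropic's reference GitHub server as stand-ins — the actual servers and tokens on the project are assumptions here, not documented specifics:

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR_FIGMA_TOKEN", "--stdio"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_GITHUB_TOKEN" }
    }
  }
}
```

With both servers registered, a single prompt can reference a Figma node and a file in the repo, and Claude holds that shared context for the whole session.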

Here's how the loop runs

  1. I bring the approved Figma Make frames into Figma Design and define the final component architecture — which Cape components map to which UI elements, the exact spacing system, the interactive states, the responsive behaviour.
  2. I select a frame and invoke Claude via the Figma MCP plugin. Claude doesn't receive a screenshot — it receives the raw Figma design tree: every layer name, every Auto Layout constraint, every component variant key, every token reference and spacing value, read directly from the Figma API.
  3. Simultaneously, Claude reads the GitHub codebase. It knows which Cape components are already imported, which CSS tokens are already consumed, which patterns are already established in the repo. The output isn't a generic implementation — it's code that slots into the existing architecture.
  4. I render it in the browser and evaluate: does the hierarchy match what I decided in Figma? Does the spacing system hold outside the canvas? Does the component communicate the right priority at first glance?
  5. If it does, it goes to engineering review. If it doesn't, I either prompt with specific corrections or go back to Figma and revise — because sometimes rendering it in code reveals a design decision that was wrong before a single line was written.
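The multi-brand theming the generated code has to respect works roughly like this: components reference semantic --cpx-* CSS custom properties by name, and each brand supplies its own values for the same variable names. A minimal sketch — the token names, brands, and values below are placeholders, not the real Cape definitions:

```typescript
// Brands are placeholders; Delivery Hero's real brand keys differ.
type Brand = "brand-a" | "brand-b";

// A component never hard-codes a value; it asks for the variable.
function cpxVar(token: string): string {
  return `var(--cpx-${token})`;
}

// Each brand maps the same semantic token names to its own values.
const themes: Record<Brand, Record<string, string>> = {
  "brand-a": { "color-primary": "#d70f64", "radius-card": "8px" },
  "brand-b": { "color-primary": "#0a3847", "radius-card": "12px" },
};

// Emits the CSS block that scopes one brand's values to a data
// attribute, so switching brands is a one-attribute change at the root.
function themeCss(brand: Brand): string {
  const lines = Object.entries(themes[brand])
    .map(([name, value]) => `  --cpx-${name}: ${value};`)
    .join("\n");
  return `[data-brand="${brand}"] {\n${lines}\n}`;
}
```

So `cpxVar("color-primary")` yields `var(--cpx-color-primary)`, and the evaluation step in the loop becomes a check that Claude's output consumed the variables rather than inlining brand values.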

We're building it in sequence, not all at once. Dashboard came first — the most data-dense surface, the one that had to hold up under the pressure of a real sales morning. Pipeline is being resolved now — the vendor list, stage filters, action queue. Communications is next. Each surface goes through the full Figma-to-Claude-to-browser loop and into engineering review before the next one starts.

"When you use Figma MCP, Claude reads your actual Figma — every color, every spacing, every component variant. Literally no more design-to-dev translation loss."

— XDA Developers, on Figma MCP + Claude Code

The June deadline is fixed. The screens are still resolving. But the loop — Figma to Claude to browser, back to Figma if needed — has compressed what would have been a two-month back-and-forth into a cycle I can run in hours, alone, with no translation layer between the design and the code.

SalesHero CRM — screen 1
The Dashboard — recommendations, metrics, and today's outreach progress at a glance.
SalesHero CRM — screen 2
Pipeline breakdown and team leaderboard — stage activity, follow-up queue, and how you rank.
SalesHero CRM — screen 3
Communication hub — inbox and today's schedule side by side, the full day in one view.
SalesHero CRM — screen 4
Pipeline list view — every vendor, their grade, stage, and next action, scannable at speed.
SalesHero CRM — screen 5
Overdue flags, hot lead tags, action queued — the status reads in seconds, no hunting.
SalesHero CRM — screen 6
AI Co-Pilot live in the call — listening, ready to surface facts as the conversation moves.
SalesHero CRM — screen 7
Call completed. Discussion points and recommended next actions generated instantly.
SalesHero CRM — screen 8
One tap to schedule the follow-up — date, time, and context pre-filled from the call.
SalesHero CRM — screen 9
Share the onboarding link via email or WhatsApp — one action, vendor in the flow.

What Changed

The shape of design work has shifted. Not shrunk.

By Spring 2026, 50.8% of designers use AI-assisted workflows every week. The most-used design tool after Figma is now an AI. Research from arXiv confirms what I experienced on the sales CRM project: AI-assisted product design teams work in a four-stage loop — ideate, generate, debug, review — and for some teams, it has eliminated the need for pixel-perfect mockups entirely.

But the more interesting number is from the Figma 2025 AI Report: only 54% of designers say AI improves their work quality, versus 68% of developers. I think I know why that gap exists. Developers use AI to write code. Designers use AI to make decisions — and AI can generate an output that looks right while being wrong in ways that only judgment can detect.

My job has not gotten smaller. It has gotten more concentrated.

I still do the research. I still talk to users — Marvin didn't eliminate that, it made it more efficient. I still make the call about which insight is the right one to design from. I still feel when a layout has the wrong rhythm before I can explain why. And I bring that judgment into every Figma Make prompt I write, every Marvin query I run, every Claude implementation I evaluate.

"AI is going to help humans explore much faster, go much further in their ideation, but the human judgment, empathy, craft, and taste — that's what it means to be the pilot not the copilot."

— Figma's Head of AI Product

The tool works. The reps use it. And I ran the whole process faster than I would have run the research alone two years ago.

The drag is gone. The thinking is still mine.

That is what vibe coding means for a product designer. Not less work. Different work. The right work — at the right pace, with the right tools in the right hands. Mine.


Sources & Context

Andrej Karpathy coined "vibe coding" in a February 2025 post that received 4.5 million views. Collins English Dictionary named it Word of the Year 2025. The Spring 2026 State of Prototyping survey was conducted by UX Tools across 1,478 designers. Figma Make launched at Config in May 2025, powered by Claude. HeyMarvin is an AI-native qualitative research repository. Claude's Model Context Protocol was released by Anthropic in November 2024. Figma 2025 AI Report surveyed 2,400+ design and development professionals. IDEO ideation research published 2024. arXiv paper: "Vibe Coding for Product Design" (2509.10652).