If you’ve reviewed enough designs, you know the moment. The screens are polished. The flows look reasonable. The prototype “works.” And yet, during review, small things begin to surface. An edge case no one considered. A missing empty state. A flow that technically connects but doesn’t quite make sense when you say it out loud. A requirement that everyone thought they understood but didn’t.
The sprint slows down. Conversations stretch. Designers are pissed.
This doesn’t happen because we aren’t careful. It happens because design review today is built on a fragile assumption: that looking at screens is enough to understand intent.
It isn’t.
The real failure mode of design review
In most teams, design work is fragmented by default.
Product requirements live in documents or PDFs. Research lives in decks. Designs live in Figma. Feedback lives in comments. Analytics lives somewhere else entirely. And most of the time, decisions were made in meetings that you didn’t attend.
When review time comes, we open Figma and try to reconstruct the story in our heads. We rely on what we remember happening. We rely on trust that nothing important was lost along the way.
I wasn’t reviewing intent. I was reviewing screenshots.
Why AI hasn’t meaningfully helped (yet)
Over the past year, we’ve seen an explosion of “AI for designers” tools. Most of them promise some variation of speed: faster screens, faster layouts, faster execution.
But review isn’t about speed. It’s about completeness.
The hardest part of design review isn’t drawing the UI. It’s answering quieter questions:
Did we design all the states? Did we account for failure? Does this flow actually satisfy the original intent? Did research insights make it into the final decisions?
AI struggles here because it’s usually given only the output (the screens) without the inputs that shaped them. Without access to intent, AI can only guess.
Context isn’t a nice-to-have for review. It’s the entire job.
A different approach
Instead of building another web app or Figma plugin, I tried something simpler.
I turned Cursor into my design review workspace.
Cursor already has everything review needs: files, folders, long-form context, and agents that can reason across documents. More importantly, it’s already where I think when I’m trying to understand a system.
So I built Superdesigner.ai as a lightweight, local workflow.
Not a product that generates UI. Not a tool that critiques visual style. But a system that reviews design intent by reading the same artifacts my colleagues rely on, just more consistently and on demand.
What Superdesigner actually does
Here’s how it works.
At its core, Superdesigner reads three things: the PRD, the research, and the Figma link.
From that, it produces a structured design review. It reconstructs the intended user flow. It generates a checklist of expected screens. It highlights missing states like empty, loading, error, and recovery. It surfaces edge cases described in the PRD that never made it into the design.
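Under the hood, the completeness check is conceptually just a diff between what the PRD implies and what the Figma file contains. Here’s a minimal sketch in Python with the inputs hard-coded; in practice an agent extracts the screen list from the PRD and the frame names from the Figma file, so treat every name below as hypothetical:

```python
# Sketch of the completeness check: compare the states the PRD implies
# against the frames that actually exist in the Figma file.
EXPECTED_STATES = ["default", "empty", "loading", "error", "recovery"]

required = {
    f"{screen}/{state}"
    for screen in ["cart", "payment", "confirmation"]  # screens named in the PRD
    for state in EXPECTED_STATES
}

designed = {  # frame names pulled from the Figma file under review
    "cart/default", "cart/empty", "cart/loading",
    "payment/default", "payment/error",
    "confirmation/default",
}

# Anything required but not designed becomes a review finding.
for frame in sorted(required - designed):
    print(f"Missing state: {frame}")
```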
It generates calm, specific comments that can be posted directly into Figma, grounded in intent rather than opinion.
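Posting those comments is the one step that touches an external service. Here’s a sketch using the Figma REST API’s comments endpoint, assuming a personal access token and a hypothetical file key; error handling and comment positioning are kept minimal:

```python
# Post a review comment to a Figma file via the REST API.
# Requires a personal access token: https://www.figma.com/developers/api
import os
import requests

FIGMA_TOKEN = os.environ["FIGMA_TOKEN"]  # personal access token
FILE_KEY = "abc123"  # hypothetical: the key from the Figma file URL

def post_comment(message: str) -> None:
    resp = requests.post(
        f"https://api.figma.com/v1/files/{FILE_KEY}/comments",
        headers={"X-Figma-Token": FIGMA_TOKEN},
        # Add a client_meta field to pin the comment to a canvas position;
        # without it, the comment is unanchored. See the Figma REST docs.
        json={"message": message},
    )
    resp.raise_for_status()

post_comment(
    "PRD section on payment failure describes a retry flow; "
    "no error or recovery state exists for the payment screen yet."
)
```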
And finally, the icing on the cake: it creates a Figma Make prompt to generate a new prototype with the proposed solutions baked in. Almost Black Mirror-ish.
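The Make prompt itself is just text assembled from the review’s findings. An illustrative sketch, much simpler than the template in the repo:

```python
# Assemble a Figma Make prompt from the review output (illustrative only;
# the findings here are hypothetical placeholders).
missing_states = ["payment/error", "payment/recovery", "cart/error"]
prd_intent = "Users can recover from a failed payment without losing their cart."

make_prompt = (
    "Extend the existing checkout prototype.\n"
    f"Design intent: {prd_intent}\n"
    "Add the following missing screens/states:\n"
    + "\n".join(f"- {s}" for s in missing_states)
    + "\nKeep the existing visual style; focus on flow completeness."
)
print(make_prompt)
```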
It doesn’t judge aesthetics. It doesn’t argue taste. It answers a simpler question: is this design complete relative to what we said we were building?
The shift this enables
Design review shouldn’t start in Figma.
It should start with intent.
When intent is explicit and traceable, visual critique becomes easier and more productive. Conversations move from “I feel like something’s missing” to “this requirement hasn’t been addressed yet.”
Superdesigner doesn’t replace designers. It reduces design regret, the kind that shows up late, costs time, and erodes trust.
Why this works inside Cursor
Cursor isn’t just an editor. It’s a thinking environment.
By keeping PRDs, research, review prompts, and outputs in one place, designers stay in flow. There are no dashboards to manage, no accounts to create, no abstraction layers to learn. Just files and agents doing their job.
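To make “just files” concrete, here’s roughly the shape of a review workspace. This is a minimal sketch with hypothetical file names, not the template repo’s actual layout:

```python
# Minimal sketch of a local review workspace (hypothetical file names).
# The idea: every artifact the review needs lives next to the design work,
# so an agent in Cursor can read all of it without leaving the editor.
from pathlib import Path

WORKSPACE = Path("reviews/checkout-flow")  # one folder per review

ARTIFACTS = {
    "prd": WORKSPACE / "prd.md",           # product requirements
    "research": WORKSPACE / "research.md", # interview notes, findings
    "figma": WORKSPACE / "figma-link.txt", # URL of the file under review
}

def check_workspace() -> None:
    """Fail fast if any input the review depends on is missing."""
    missing = [name for name, path in ARTIFACTS.items() if not path.exists()]
    if missing:
        raise SystemExit(f"Missing review inputs: {', '.join(missing)}")

if __name__ == "__main__":
    check_workspace()
    print("All review inputs present. Ready to run the agent.")
```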
That constraint is intentional. Review works best when it’s quiet, focused, and grounded.
How I’m sharing this
I’m open-sourcing Superdesigner AI as a template repository.
You clone it, star it, run it locally, and test it on a real project. I recommend starting small: one PRD, one flow, one review. Judge the feedback. Test different models.
See what it catches. More importantly, see what it misses.
That feedback is the point.
Got a design review or critique this week? Run this once and you’re set!
👉 https://github.com/sherizan/superdesigner-ai
Who this is for
This is for designers who care about craft, design managers reviewing multiple projects, founding designers balancing strategy and execution, and teams shipping fast with AI in the loop.
If you’ve ever looked at a design and thought, “this seems fine, but something feels off,” this tool is built for that moment.
What’s next
I’ll follow this up with a live YouTube build, deeper workflows that connect analytics (Amplitude MCP) before the final design report is generated (now that’s some Black Mirror stuff right there), and thoughts on how we can evolve this into a practice for modern design teams.
For now, try it. Break it. Tell me what it gets wrong.
That’s how this becomes useful.