Our Third AI × Design Morning in the Books

Thanks to everyone who joined, in-person and remote. And to Simon, co-host extraordinaire and Google Meet whiz.

If you couldn’t join, here’s a short recap of what went down:

Community demos kicked things off. We saw a receipt printer synced to Google Tasks, a beautiful Hacker News UI, 90s screensavers brought back from the dead, and a fully working second-brain desktop app that started life in our session last month. What an inspiring range of possibilities!

And that was all before we went deep into the current AI workflows of two senior design practitioners, Raphael and Robert, at the top of their game. Here are three things that stuck with me from the session:

Give Claude eyes

Claude can write impressive code, but it can’t ‘see’ how the output renders. Tools like Axiom or the Claude Chrome extension close that loop: the model can look at what it built, spot what’s off, and find where improvements could be made, without human intervention for the obvious stuff. Both speakers described this as a key unlock, freeing them up for higher-leverage design work and overnight agentic workflows.

Parallel agents work, up to a point

Both speakers said two was the magic number of agents running simultaneously: enough to feel the output gains, but not so many that your focus fragments. My own setup: one Claude instance mostly in plan mode with Opus for chunky features, another on Sonnet/Haiku for smaller tweaks and bug fixes. My brain can settle more deeply into the first, while the second is an escape hatch for the small things that constantly crop up.

The strongest case for design came from Christine’s community demo

Engineers built a ‘good enough’ new feature. On the surface it would probably have passed most UI quality checks, but it took design expertise and judgement to recognise how much higher the ceiling was, and to push the experience to get there. For me this made tangible the nebulous, buzzword-filled terms like “taste” and “craft”, because it directly showed the outsized value of great designers right now. Not in the production-level details, or even the AI quick wins, but in the deep intuitive sense of what’s possible, how to raise the bar to get there, and when the moment calls for it.


We also tried remote speakers (and remote guests!) for the first time. It mostly worked. Ahem. But it also felt a little chaotic with the muting and room logistics, so we’re thinking about whether future sessions should alternate between fully in-person and fully remote rather than hybrid. This would also open the door to working more closely with our US pals. Let us know what you think.

And finally, if there’s an AI-pilled designer in your life who’s creating amazing work with these new tools, and who you think would inspire more designers to try new things, please tag them below. We’d love to bring some fresh perspectives to this growing community.