Lessons learned from leading a design team during the Gen AI explosion (Part One)
13 March 2026 • 4 min read
Llara Geddes is a Solutions Director at AND Digital. She has a background in UX and generating actionable insight to inform strategy and solutions for clients.
In part one of this blog she shares insights and expert tips from leading a design team through the generative AI explosion.
Keep your eyes peeled for part two, on the AND blog, 20th March 2026.
Llara writes:
"I’ve been in UX for ~14 years (leading teams for much of that), so I can safely say the explosion of Gen AI is one of the most interesting shake-ups I’ve seen our industry go through.
I’ve been fortunate to spend the last year working closely with teams of designers as we’ve navigated our way through AI: what it means for design, how we incorporate it in our workflow, and what it means for the products and services we design. In this article, we explore how we can use AI in our design workflows.
I encourage healthy skepticism and questions. But I also believe that those who don’t at least form an informed perspective will fall behind. Based on my time working with teams over the last couple of years, it’s not those who blindly use AI, nor those who refuse to engage at all, but those who question, that will thrive. These are people who can identify when there’s a good case for using AI, both in their workflows and in the products they design. Those who can also argue the case for when it’s not appropriate to use it will build credibility and design the best products in the most effective ways. Or, at the very least, they’ll stay open enough to learn along the way.
I have seen enthusiasts on the ground spin up initiatives and tiger teams, and experiment with AI. I have seen leadership teams with grand aspirations. I have seen practitioners who don’t trust AI and don’t want to use it in any way. And I have seen leaders be slow to adopt, getting in the way of experimentation with overly bureaucratic processes. In a recent article on AI skills, AND’s Chief for AI, Kenn Van Hauen, highlighted research suggesting that only 44% of business leaders believe their workforce is ready for AI.
To embed AI in a meaningful way, you need bottom-up enthusiasm: those early adopters who show others what’s possible. They don’t wait for permission, but seek ways to demonstrate what can be done (within the bounds of security, of course). However, to make progress and have impact, top-down buy-in is needed. Without senior buy-in and advocacy, security concerns, bloated process and red tape get in the way. Sometimes the former can drive the latter: when an enthusiast experiments and shows what’s possible, leadership takes notice and clears the path.
While we’re inarguably seeing a lot of hype and a lot of failed experiments, AI is changing what we do, both in terms of the products and services we deliver and how we deliver them. Here are some of the ways I’ve seen AI work well in design so far, some lessons learned and cautions, and some things to avoid.
AI and Research
There are many contexts in which we might use AI in research: some are genuine time savers, some are red herrings to be avoided.
Synthetic users
There’s been extensive discussion over the last few years about “synthetic users”: services that allow you to ‘do’ user research without actually speaking to real-life humans, or conducting your research in conversation with ChatGPT or similar. A 2023 article outlines some of the pitfalls. While we’ve come a long way since then - we’ve all got better at prompting, assigning our AI a persona, giving it a clear task and relevant context, and telling it the kind of outcome we’re looking for - this methodology does not substitute for human behaviour. As we see in the article, it’s open to bias and stereotyping, and we run the risk of confidently stated hallucinations, where the AI simply asserts something that may be untrue. Only by speaking to and observing real people and understanding their behaviour can we truly validate both problem and solution.
Analysis
Having spoken to real users, I have found success in using AI to save time. Note synthesis, using tools like ChatGPT or Miro, has improved significantly over the last couple of years, and I find it helpful for expediting analysis. However, it’s critical that the output is overseen or reviewed by someone who was involved throughout the research. Without that oversight, nuance and context are too easily missed. This is where rigorous three-gate checks start to come into play (I’ll reference this and human-in-the-loop a lot):
- Gate 1: Immediate Scan (checking format/tone)
- Gate 2: Validation (fact-checking critical claims, testing stability)
- Gate 3: Integration (embedding human sign-off and ongoing monitoring)
We have to review the output ourselves to be confident that all insight is captured and that context and nuance are considered. A colleague participated in research sessions that were then synthesised by someone else. When she read the findings, she knew that the results were misleading and that additional context needed to be applied - Gate 3."
Check the blog next week for Part Two, covering Revisiting insight, Effective use of AI in Research, AI and Design & Prototyping, and a Conclusion.
For more from Llara, check out her Medium.