Woman in an online meeting using the UX Interview Assistant.

UX Interview Assistant

This project began as a response to a common pain point in UX work: the cognitive load of conducting user interviews. I wanted to create an AI-powered interview assistant that could support designers in real time, transcribing conversations, suggesting insightful follow-up questions, and helping them stay present during interviews.


However, this wasn’t just about solving a UX challenge. It was also an experiment in how we as designers might work in the future. Using the AI tool Cursor, I built the entire app from scratch, including frontend, backend, and API integrations, without writing a single line of code myself.


A simplified version of the app is live at this link.

Type of project
Personal project

When

Spring 2025

Tool

Cursor, Adobe Firefly

Languages and APIs

React, TypeScript, Python + Flask, OpenAI API, Web Speech API

My pain point

During interviews, I found myself juggling multiple tasks, such as:

 

  • Trying to understand unfamiliar accents and domain-specific vocabulary.
  • Keeping track of time, working through the interview template, and making sure there is enough time for the most important questions.
  • Listening carefully and trying to come up with relevant follow-up questions.
  • Writing down insights and taking notes.

 

There is very little room left for being present and empathising with the user…

The opportunity

Despite the many AI applications out there, I noticed a gap: what if you had an AI-powered interview assistant to help you stay present during the interview?

 

This project was an opportunity to explore how designers can leverage no/low-code tools, creating an AI solution with the help of another AI tool. I wanted to see how far AI-assisted development could take my idea.

 

Using only Cursor and natural language prompts, I built a fully functioning app, complete with frontend, backend, and API integrations, without writing a single line of code. 

The Process (and the learnings along the way)

Process from start to finish, including different activities and milestones.

It all started when I shared my idea with my colleague, who said, “You should try building it with Cursor. Doesn’t seem too difficult.” I was skeptical but also curious, because he made it seem so easy.


So I dove in. I bought a Cursor Pro license and began exploring the interface, figuring out where everything was. I also experimented with switching between different AI models.


After that, I pitched my idea to Cursor, and it helped me get a simple web application up and running. Even without styling instructions, the UI turned out surprisingly decent (see image below). Not flashy, but it looked like a real website you’d come across online. From there, I kept refining the app, adding things like page routing, language switching, and assistant logic. At this stage, the assistant’s behavior was hardcoded: it searched for specific keywords in the transcript and behaved accordingly.
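To give a sense of what that hardcoded logic looked like, here is a minimal TypeScript sketch of a keyword rule. The trigger words and phrasing are invented for illustration and aren’t taken from the actual app:

```typescript
// Illustrative sketch of the early, hardcoded assistant logic:
// scan the latest transcript chunk for trigger words and return a canned follow-up.
type KeywordRule = { trigger: string; followUp: string };

// Hypothetical rules; the real app's keywords and questions differed.
const rules: KeywordRule[] = [
  { trigger: "frustrating", followUp: "Can you tell me more about what made that frustrating?" },
  { trigger: "workaround", followUp: "How did you come up with that workaround?" },
];

function suggestFollowUp(transcriptChunk: string): string | null {
  const text = transcriptChunk.toLowerCase();
  const match = rules.find((rule) => text.includes(rule.trigger));
  return match ? match.followUp : null;
}
```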


One thing I quickly learned was that you can’t let Cursor do everything at once. It gets overwhelmed and starts hallucinating. Over time, I got better at spotting early signs of context overload. Sometimes, something as simple as starting a new chat would solve the problem.

With most of the logic in place, I shifted focus to refining the UI, which was the most fun part for me. I quickly realized that I had to be very specific when giving design-related instructions to Cursor. It ain’t gonna fix that perfect whitespace on its own. That got me thinking: maybe designers in the future will play more of a refinement role, polishing rough AI-generated outputs instead of starting from scratch.

 

I decided to publish the app online via GitHub Pages, mostly because I wanted to see what it looked like on my phone. It also made it super convenient to access the app from any device. Going live felt exciting!

 

But shortly after publishing, I received warnings from both GitHub and OpenAI: I had accidentally exposed my API key to the public. At the time, I barely knew what an API key was, let alone how to handle one securely. No one stopped me; I just followed Cursor’s instructions, right?


Still, it didn’t sound good. I immediately deleted the API key, took the app offline, wiped the version history, and tried to remove any trace of the leak. In the cleanup process, I also accidentally deleted version control in Git… which I didn’t realize until much later. Ironically, it turned out the published app didn’t even use the API key. None of the features relied on it, and even if they had, the key was useless since I hadn’t linked any payment method. So I was lucky in the end…

 

I kept working on the app, but at one point Cursor hit its context limit. I asked for a tiny change, and Cursor rewrote nearly every file, completely ignoring the existing code. The app broke and became unusable. I tried restoring an earlier version from chat history, but Cursor crashed whenever I went too far back. “Maybe I can restore it from version control?” I thought. Nope. Git was gone… So, I rebuilt the app from scratch. Fortunately, it went much faster the second time. I continued improving the UI, especially focusing on making it more mobile-friendly, which can be seen in the image below.

And last but not least, I finally implemented real LLM features in the app! This time using OpenAI’s API (GPT-4o mini), and with an active API key that wasn’t exposed to the public. This upgrade took the app to the next level, shifting it from simple hardcoded logic to true context awareness.


For this part, I continued developing locally, not just to keep my functioning API key private, but also because I didn’t see the need to publish it online. The core idea was already out there, and I learned that this version of the app couldn’t run on GitHub Pages alone: since Pages only serves static files, the Flask backend would have to be deployed separately elsewhere.


With a limited budget, I wasn’t ready to offer my OpenAI credits to anyone who happened to stumble across the app. And if I had allowed users to plug in their own API keys, it would’ve introduced a whole new set of responsibilities around security and data privacy, which would have taken the fun out of the project. I’m a designer, after all, not a backend architect.


Since using the LLM is a paid feature, I also added reminders and modals to alert me whenever I was about to trigger an action that used the OpenAI API. I didn’t want to accidentally burn through my credits with too many requests.
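Conceptually, that guard is very simple. A rough sketch, using a plain browser confirm dialog as a stand-in for the app’s actual modals:

```typescript
// Simplified cost guard: ask for confirmation before any action that calls the paid OpenAI API.
// The real app uses custom modals; window.confirm stands in for them here.
async function confirmPaidAction(action: () => Promise<void>): Promise<void> {
  const ok = window.confirm(
    "This will send a request to the OpenAI API and use paid credits. Continue?"
  );
  if (ok) {
    await action();
  }
}
```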

The Final Application

Setup

On the start page, you’re greeted by an image of a cute bot, your interview assistant, along with a short explanation of how the app works.

 

Below, there’s a text input field where you can add context for your interview. You can describe the topic, purpose, goals, or specific areas where you’d like support. Just write naturally, like you would when prompting ChatGPT, Gemini, or similar tools.

 

Next, you choose the logic to use during the interview. You can either go with the hardcoded option, which is free and simple but lacks context awareness, or the AI-powered LLM option, which is smarter and context-aware, but requires an active API key. If you choose the hardcoded assistant, the context you entered will simply be stored as static information and won’t influence its behavior.

 

Once the setup is complete, you can begin the interview. You’ll be taken to a new page for the “Ongoing Interview,” where you can select the transcription language, start recording, and let the magic happen!

Interview

During an ongoing recording, the transcript is generated in real time.
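The live transcript is built on the Web Speech API. A minimal sketch of that setup might look like the following; the app’s actual language handling and error handling are more involved:

```typescript
// Minimal real-time transcription sketch using the browser's Web Speech API.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.continuous = true;      // keep listening for the whole interview
recognition.interimResults = true;  // show partial results while the participant is speaking
recognition.lang = "en-US";         // in the app, this comes from the language selector

recognition.onresult = (event: any) => {
  let transcript = "";
  for (let i = event.resultIndex; i < event.results.length; i++) {
    transcript += event.results[i][0].transcript;
  }
  console.log(transcript); // in the app, this updates the live transcript view instead
};

recognition.start();
```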

 

If you selected the AI-powered LLM assistant, you can tap the “Get AI Assistance” button at any point during the interview. This sends the context and current transcript to the OpenAI API, which returns context-aware follow-up questions and key themes. You can then choose whether to ask the suggested questions.
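Under the hood, the button boils down to a request like the sketch below. The endpoint name and response shape are assumptions for illustration; in the app, a small Flask backend receives the request and forwards it to the OpenAI API:

```typescript
// Hypothetical sketch of the "Get AI Assistance" call. The /api/assist endpoint
// and the response fields are assumptions, not the app's real interface.
interface AssistResponse {
  followUpQuestions: string[];
  keyThemes: string[];
}

async function getAiAssistance(context: string, transcript: string): Promise<AssistResponse> {
  const response = await fetch("/api/assist", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ context, transcript }), // interview context + transcript so far
  });
  if (!response.ok) {
    throw new Error(`AI assistance request failed: ${response.status}`);
  }
  return response.json();
}
```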

 

If you’re using the hardcoded assistant, the system listens for specific keywords in the conversation and triggers follow-up questions accordingly. It can also detect moments of silence, which may indicate the conversation is stalling. In those cases, the assistant offers a few suggested prompts to help get things flowing again.
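The silence detection can be thought of as a timer that resets every time new speech comes in. A hedged sketch, with an assumed threshold and a hypothetical showSuggestion callback:

```typescript
// Sketch of silence detection: if no new transcript text arrives within the
// threshold, offer a prompt to get the conversation flowing again.
const SILENCE_THRESHOLD_MS = 15_000; // assumed value; the app's threshold may differ
let silenceTimer: ReturnType<typeof setTimeout> | undefined;

function onNewTranscriptChunk(showSuggestion: (prompt: string) => void): void {
  if (silenceTimer) clearTimeout(silenceTimer);
  silenceTimer = setTimeout(() => {
    showSuggestion("It seems quiet. Maybe ask the participant to walk you through their last point?");
  }, SILENCE_THRESHOLD_MS);
}
```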

 

The assistant’s suggestions appear in a banner in the bottom right corner of the screen. This was a deliberate design choice to make them stand out visually and prevent them from getting lost in the continuously updating transcript.

Look back

Completed interviews are stored in the “archives” tab.

 

Here you can look back at the interview: what the setup was, what was said, and so on. You can also get an AI-generated summary of the key highlights and themes, which also uses the OpenAI API. This can be done for any interview.

 

You can also download the transcript as well as the audio for further review.

Reflection

AI as a design and development partner

This project was an invaluable learning experience in collaborating with an AI development partner. It highlighted the potential to quickly translate UX requirements and design ideas into functional code, greatly accelerating the prototyping phase and perhaps even reducing the need for wireframing.

 

While the AI handled much of the complex coding, such as API integrations and backend setup, achieving pixel-perfect UI details or resolving subtle layout issues required iterative refinement and patience. Overall, our collaboration was complementary, resulting in a powerful partnership.

 

The process also revealed what feels like a significant shift in how we’ll work in the future. There’s no longer a need to rely on conventional, slow, manual methods when AI tools can do it for us. The key is learning how to use these tools effectively. I believe those who embrace this shift will be well-prepared, leading the way forward. The train is leaving the station, and it’s up to you whether you’ll hop on.

The concept itself

Creating this concept has made me realize how many potential use cases there are for AI, especially LLMs. We’ve seen AI tools for a while now, helping with everything from design and image generation to producing final outputs. But it got me thinking: why should AI only be used to replace the fun, creative parts of our work? It made me reflect on whether there’s also value in using AI earlier in the design process, during the more manual, sometimes messy stages. That’s where I see potential for a tool like this.


The app, as it’s currently built, isn’t really ready for use in a critical interview setting. The real-time transcription is simply not accurate enough to handle different dialects, accents, and mumbling. While there are better APIs and technologies out there for more accurate transcription, these solutions are not free. Then there is also the matter of confidentiality in interviews: sending entire transcripts of who said what about which companies to external services should be avoided due to privacy concerns.


The purpose of this project was not to build a perfect, market-ready app. Knowing that the concept works and that the key technologies exist is enough for me. If someone wants to take inspiration from this idea and create a more robust, reliable, commercialized version, feel free. I’d be your first customer!