AI is the New UI

  • Morgan
  • 2 Minutes
  • October 16, 2025

We’ve been talking about artificial intelligence and LLMs a lot at work lately. As we’re pushed to become more and more fluent in AI, one of the things we’ve been trying to figure out is when to invest, and how we know whether a product is worth making part of an LLM.

At work, we have a general LLM, named Gus, that handles all interactions with our customers. Gus has MCP servers, tooling, evals, and so on that allow it to connect to many different places and many different features at the company. The better Gus gets at its work, the more useful it becomes, and adding utility to an already useful tool becomes a priority. As my team builds new functionality, how do we know which functionality should become part of the AI system?
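To make that concrete, here’s a minimal sketch of how a single feature might be exposed to an assistant like Gus through an MCP server. This assumes the official MCP TypeScript SDK; the `get_order_status` tool and its server are hypothetical examples, not Gus’s actual internals.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A tiny MCP server exposing one hypothetical feature as a tool.
const server = new McpServer({ name: "orders", version: "0.1.0" });

server.tool(
  "get_order_status",          // hypothetical tool name
  { orderId: z.string() },     // input schema the LLM must satisfy
  async ({ orderId }) => ({
    // In a real server this would call the feature's backend.
    content: [{ type: "text", text: `Order ${orderId}: shipped` }],
  })
);

// Connect over stdio so an LLM host can discover and call the tool.
await server.connect(new StdioServerTransport());
```

Once a feature is wrapped like this, the LLM can discover it, decide when to call it, and fold the result into a conversation, which is exactly the “adding utility” decision my team keeps facing.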

I was recently talking to a designer on our team who made an observation: whenever we build any experience on our team, we need to think of AI as a user interface that we have to support. It’s very similar to the switch we made decades ago from desktop-only to mobile-first.

Whenever we build for our customers, we need to build to support:

  • Desktop and web
  • Mobile
  • AI, through the LLM

I’m not sure how I feel about this requirement, but I do see the value in it for our customers. With LLMs as a single, unified, ubiquitous user interface, the more capability you put into that one system, the less confusing the overall experience becomes.

There is a downside: we risk putting designers out of a job. They will have to adapt as these interfaces become more visual over time.

ChatGPT’s Canvas feature has already shown me the value of having generic user interfaces that are more than just text as part of an LLM. And I wonder how often we’re going to need to integrate something like a table, a picture, or a diagram into a normal LLM chat.