The problem nobody is talking about

Language models have a well-documented problem with time: their training data has a cutoff, so they don't know about recent events. Everyone knows this. It's a limitation, it's documented, there are workarounds.

But there's a second, quieter problem that gets almost no attention, and it lives inside the conversation itself, not before it: the model doesn't know when, within a thread, each of your messages was sent.

Your 9am message and your 3pm reply are processed with identical weight. There is no elapsed time between them — not from the model's perspective.

Think about what that means for a product built on top of an LLM. A user opens a support chat in the morning, gets a partial answer, goes away for six hours, comes back frustrated — and the AI responds as if no time has passed. As if the frustration has no context. As if the gap between messages is zero.


Why timestamps are invisible to LLMs

When you send a message, the platform logs a timestamp. But by the time that message reaches the model, it's been stripped down to plain text. The model sees what you said, not when you said it — unless someone deliberately injected that information into the context.

Most platforms don't do this. It's not a technical impossibility. It's a design oversight that has become a default.

The fix is straightforward: inject timestamps into the system prompt or context window. "User's previous message was sent 6 hours ago. User has returned to a conversation they started this morning." That's it. That's the whole intervention.
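As a minimal sketch of that intervention (the helper name and thresholds are illustrative, not a real platform API), the whole thing can be a few lines that turn two timestamps into a system-prompt sentence:

```python
from datetime import datetime, timezone

def temporal_context(last_message_at, now=None):
    """Render elapsed time since the user's last message as a line
    for the system prompt. Hypothetical helper; the minute/hour
    bucketing is illustrative."""
    now = now or datetime.now(timezone.utc)
    minutes = int((now - last_message_at).total_seconds() // 60)
    if minutes < 60:
        return f"User's previous message was sent {minutes} minutes ago."
    return f"User's previous message was sent {minutes // 60} hours ago."
```

Prepending the returned string to the system prompt is all the "injection" amounts to; the model does the rest.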

[Diagram: Conversation timeline — how elapsed time disappears in the context window]

What it means for designers

If you're building AI-powered products, this is a UX problem, not just an ML problem. The model's blindness to time creates conversational experiences that feel wrong even when they're technically correct.

Imagine a journaling app where the AI reflects on what you've written over the past month. Without temporal awareness, every session is treated as equal — a single heavy day reads the same as a calm one from three weeks ago. The model can't weight recency, can't sense urgency, can't understand that something written at 2am is different from something written at 2pm.

"The model knows what you said. It has no idea when you said it, or how much time has passed since."
On temporal blindness in LLMs

This is a solvable design constraint, not a fundamental limitation. The tooling is there. The question is whether designers are thinking about it — and right now, most aren't, because the problem is invisible until you go looking for it.

[Video: temporal context in conversation design]

Three patterns worth building

If you're designing an AI-native product and you want to handle time correctly, here are three patterns that actually work:

01 — Timestamp injection

Pass message timestamps into the system context. Even a simple delta — "last message was 4 hours ago" — gives the model enough to modulate its tone and assumptions. This is the minimum viable intervention.
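A sketch of the per-message version, assuming each message is a dict with a `sent_at` datetime (the field names are assumptions for illustration, not a real chat API):

```python
from datetime import datetime, timezone

def annotate_history(messages):
    """Prefix each message with the gap since the previous one, so the
    model sees elapsed time inline. Sketch under assumed field names:
    'role', 'content', 'sent_at'."""
    annotated = []
    prev = None
    for m in messages:
        if prev is not None:
            mins = int((m["sent_at"] - prev).total_seconds() // 60)
            prefix = f"[{mins} min since previous message] "
        else:
            prefix = ""  # first message has no delta
        annotated.append({"role": m["role"], "content": prefix + m["content"]})
        prev = m["sent_at"]
    return annotated
```
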

02 — Session boundary detection

If a user returns after a gap longer than a threshold (30 minutes, an hour — depends on your product), treat it as a new session contextually, while preserving memory. The greeting, the framing, the assumed state of mind should all shift.
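The threshold check itself is trivial; what matters is what you do with the answer. A sketch under assumed names (the 30-minute default is illustrative, and product-specific in practice):

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)  # threshold is product-specific

def on_user_return(last_seen, now, gap=SESSION_GAP):
    """Decide whether to reframe the conversation as a new session.
    Long-term memory is kept either way; only the framing shifts.
    Hypothetical helper, not a prescribed API."""
    if now - last_seen > gap:
        return "new_session"   # fresh greeting, re-establish context
    return "same_session"      # continue as an ongoing exchange
```
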

03 — Recency weighting in retrieval

For products that store long-term user history, recency should be a first-class retrieval signal. What the user said yesterday should generally outweigh what they said last month — unless there's a specific reason to surface older context.
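One common way to make recency a first-class signal is exponential decay on the retrieval score. A sketch, assuming a semantic similarity score is already available; the 7-day half-life is an illustrative default, not a recommendation:

```python
from datetime import datetime, timedelta, timezone

def recency_score(similarity, stored_at, now, half_life_days=7.0):
    """Weight a retrieval candidate by exponential recency decay:
    a memory loses half its weight every `half_life_days`."""
    age_days = (now - stored_at).total_seconds() / 86400
    return similarity * 0.5 ** (age_days / half_life_days)
```

With this weighting, yesterday's entry outranks last month's at equal similarity, while a much stronger semantic match from the past can still win, which covers the "specific reason to surface older context" case.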

[Audio: Extended thoughts on temporal UX — temporal-ux-notes.mp3]

The bigger picture

This isn't just a technical detail. It's a signal about how the industry is thinking about AI product design — which is to say, mostly in terms of capability rather than experience.

The models are getting smarter. The interfaces around them often aren't. Temporal awareness is one small example of a much larger gap: the gap between what AI can do and what it's being asked to actually understand about the humans using it.

Closing that gap is the design work of the next decade. And it starts with noticing the things that are invisible until they're pointed out.