Who's the boss
People have been having conversations for thousands of years. We’re wired for it. But we’re not wired for talking to something that doesn’t understand social cues: the subtle, unspoken signals that shape human interaction. And yet, here we are, trying to talk to machines like they’re people.
One of the most overlooked aspects of conversation is how much the who changes the what. Ask your kid, “What’s going on today?” and you’ll probably hear about a test or what’s for lunch. Ask your spouse the same question and you might get a rundown of errands, family plans, or the general mood of the house. Ask a coworker, and it’s probably meetings, deadlines, and maybe a complaint about Slack.
Same question. Different answers. Because the context is implied. Who you’re talking to is part of the question.
We don’t spell this out in normal conversation. We don’t say, “What’s going on today, as it pertains to your school schedule, child of mine.” We just say, “What’s going on today?” And the response makes sense because it’s grounded in the relationship.
That’s what I’ve been thinking about with MARVIN, my Home Assistant. I can say, “What’s going on today?” and the ideal response wouldn’t just be a generic info dump. It would be filtered through the lens of who MARVIN is. He might summarize my calendar, mention a weather system rolling in, or remind me that it’s garbage day. All without me having to spell out, “Give me a contextual summary of today’s events, combining my appointments, home status, and environmental factors.”
The persona itself carries the context. If instead I asked the home security persona, the same question might return a very different answer. Something like, “A package was delivered an hour ago, the front door is unlocked, and the kitchen window sensor is throwing an error.”
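That routing idea is easy to sketch in code. The snippet below is a minimal, hypothetical illustration, not how MARVIN is actually built: each persona is just a different system prompt, and the user's question is passed through unchanged. The persona names and prompt text are all made up for the example.

```python
# Minimal sketch: the same question, routed through different
# personas, produces different answers because each persona
# carries its own context. All names here are hypothetical.

PERSONAS = {
    "marvin": (
        "You are MARVIN, a household assistant. Answer using the "
        "user's calendar, the weather, and household routines."
    ),
    "security": (
        "You are the home security persona. Answer using door locks, "
        "window sensors, cameras, and delivery events."
    ),
}

def build_prompt(persona: str, question: str) -> list[dict]:
    """The question never changes; only the persona's system prompt does."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": question},
    ]

question = "What's going on today?"
for name in PERSONAS:
    messages = build_prompt(name, question)
    # Each message list would be handed to a chat model; the persona's
    # system prompt supplies the context the user never spelled out.
    print(name, "->", messages[0]["content"][:40], "...")
```

The point of the sketch is that the "who" lives entirely in the system prompt; the user-facing question stays as short and natural as it would be between people.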
Here’s the interesting part. Unlike people, these AI systems don’t struggle with scope. They have a massive depth and breadth of knowledge. They don’t get confused switching between home updates and project deadlines or kid stuff and security alerts. They can access all of it simultaneously across what we might think of as different people or roles.
The limitation isn’t technical. It’s us.

We want to speak naturally. We want to ask short, open-ended questions and have the right answer show up without having to spell out which parts of our life we’re talking about. And the easiest way to make that work is to treat AI the same way we treat humans. We don’t say, “Give me all relevant updates about my personal life, professional obligations, and house status.” We say, “What’s going on today?” And the context rides along with the who.
In our human brains, that kind of context travels with a person or a persona. It’s how we manage ambiguity without even noticing it. So when I talk to MARVIN, he’ll answer one way. If I want a different context, I don’t change the question. I talk to someone else.
I don’t ask a colleague how my kid’s science test went. And I don’t ask MARVIN about motion alerts at the back door. I ask the right persona.
This kind of contextual understanding, this way of letting the who shape the what, is not just a feature I want. It is the reason I’m building my own Home Assistant instead of using something off the shelf. Real conversation is not just about language. It is about knowing who you’re talking to. And I want my AI to understand that as naturally as any person would.
Linguists have a word for this: pragmatics. It is the study of how meaning is shaped by context, relationship, and shared assumptions. Everything that lives outside the literal words.
There is probably already a proper name for this idea in some academic paper or thesis. I just do not know what it is.
I guess it depends on who you ask…