LLM Communications in the Wild

Michael Aspinwall

January 5, 2026

LLMs are being widely deployed to production, with real people chatting back and forth with semi-supervised agents. I’ve been working as a solo dev on one such project, and these are some of my observations from seeing LLM communications in the wild. Many of these experiences are tinted by my particular circumstances and resource constraints, but I expect some of these are general problems. Here’s what LLMs are great at, where they fail in production, and a few patterns I’ve found that help.

Chat Is the Hammer

What are the strengths of LLMs?

Ok, but more importantly, what are their weaknesses?

Maybe there are nails everywhere?

Businesses of all sizes run on texts, calls, and emails… and whatever the dreaded ERPs are in your vertical. Luckily we have friendly(ish) APIs for each of the above. From what I’ve seen, voice is the most fickle; the telephony stack alone can be pretty nasty. Before you know it, you’re digging through PCAP logs in Wireshark to find missing headers on a SIP call… you may ask yourself, “How do I work this?” and you may ask yourself, “Where is that large automobile?” I digress.

At Mason we’ve been building an agent for property managers. A property manager is fundamentally a middleman between tenants, owners, and vendors. Property managers have a lot of people to speak to, even about simple issues. So Mason, the suave and gentle AI, steps in to get your toilet fixed faster!

Simple but effective patterns

Unit of Work

In order to have a coherent conversation, you need to relate each communication to a subset of all possible interactions. Context poisoning is real — ask me how I accidentally created a dev environment where Mason thinks I have 15 clogged toilets and calls me Big Papa. In our agent this primitive is the work order, which is passed around and enriched — serving as a log, a notification, and a financial transaction — before it’s closed. To shrink the possible world of chatter, processing a communication usually begins with an explicit router that links it to one or more open pieces of work.
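Here’s a minimal sketch of that routing step. The `llm_complete` helper and the `WorkOrder` shape are hypothetical stand-ins rather than our actual schema; the point is that the model returns explicit IDs which get validated before anything downstream runs.

```python
import json
from dataclasses import dataclass


def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    raise NotImplementedError


@dataclass
class WorkOrder:
    id: str
    summary: str
    status: str  # "open" or "closed"


def route_message(message: str, work_orders: list[WorkOrder]) -> list[str]:
    """Ask the model which open work orders an inbound message relates to."""
    open_orders = [wo for wo in work_orders if wo.status == "open"]
    catalog = "\n".join(f"- {wo.id}: {wo.summary}" for wo in open_orders)
    prompt = (
        "An inbound message arrived:\n"
        f"{message}\n\n"
        "Open work orders:\n"
        f"{catalog}\n\n"
        "Reply with a JSON list of the work order IDs this message relates to, "
        "or [] if none match."
    )
    ids = json.loads(llm_complete(prompt))
    valid = {wo.id for wo in open_orders}
    return [i for i in ids if i in valid]  # drop any hallucinated IDs
```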

Let the Context Flow

I’ve been learning to let the context flow, especially during tool calling. The agent needs to see all of the conversations across parties in order to make the correct decisions. The design choice here is that decisions are made with full context, while outgoing communications are guarded (roughly) from leaking sensitive information.
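A rough sketch of that split, again with a hypothetical `llm_complete` stub: the decision pass reads every thread, and a second guard pass rewrites the outgoing draft so it only contains what that recipient should see.

```python
from dataclasses import dataclass


def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    raise NotImplementedError


@dataclass
class Thread:
    party: str       # e.g. "tenant", "owner", "vendor"
    transcript: str


def decide_next_step(threads: list[Thread]) -> str:
    """The decision pass sees every thread so the agent can reason across parties."""
    full_context = "\n\n".join(f"[{t.party}]\n{t.transcript}" for t in threads)
    return llm_complete(
        "You are coordinating a repair across these conversations:\n\n"
        f"{full_context}\n\n"
        "Decide the next action and draft the next outgoing message."
    )


def guard_outgoing(draft: str, recipient: str) -> str:
    """The guard pass rewrites the draft with only what this recipient should see."""
    return llm_complete(
        f"Rewrite this message for the {recipient}. Remove pricing, other parties' "
        "contact details, and anything they were not told directly:\n\n"
        f"{draft}"
    )
```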

Lazy Follow-ups

Often you need to follow up with a party who is not responding. As noted above, the context behind such a follow-up comes from multiple asynchronous threads, any of which may have changed before the follow-up is scheduled to go out. Given the risk of hallucination (or of never sending the follow-up at all), instead of regenerating it every time the context changes, wait until the scheduled time and then check whether the intended purpose of the follow-up is still relevant.
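Something like the sketch below, with hypothetical names: the scheduler stores only the purpose and the send time, and the relevance check runs lazily against the current context when the timer fires.

```python
import datetime as dt
from dataclasses import dataclass


def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    raise NotImplementedError


@dataclass
class FollowUp:
    work_order_id: str
    purpose: str            # e.g. "vendor has not confirmed the appointment"
    send_at: dt.datetime


def run_follow_up(follow_up: FollowUp, current_context: str) -> str | None:
    """Called by the scheduler at send_at, not when the follow-up was created."""
    # Lazy relevance check against whatever the threads look like *now*.
    answer = llm_complete(
        f"Follow-up purpose: {follow_up.purpose}\n"
        f"Current state of the conversations:\n{current_context}\n\n"
        "Is this follow-up still needed? Answer YES or NO."
    )
    if not answer.strip().upper().startswith("YES"):
        return None  # the situation resolved itself; quietly drop it
    return llm_complete(
        f"Write a short follow-up message. Purpose: {follow_up.purpose}\n"
        f"Context:\n{current_context}"
    )
```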

Hardcoded Transitions

At some point in your workflow, you’ll hardcode domain-specific transitions that feel uncomfortably manual for an AI agent. This one is a little hard to stomach, but it may be indicative of where your agent’s moat lies relative to a generic communications bot. Beyond all the effort of integrating with the industry’s existing ERPs, there is a class of vertical-specific workflow knowledge your agent needs: natural points in a workflow where the agent changes purpose and gains or loses context and tools. Why is this hard to stomach? Because you’re building on tools that are supposedly plucked out of i.i.d. heaven, and you worry that there is probably some general pattern you’re missing.
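In code this tends to look less like prompting and more like a plain lookup table. The stage names and tools below are illustrative, not Mason’s actual workflow; the point is that the transition itself is deterministic and the model only operates within whatever stage it lands in.

```python
from dataclasses import dataclass


@dataclass
class StageConfig:
    system_prompt: str
    tools: list[str]   # names of the tool schemas exposed to the model


# The transitions themselves are plain code, not model decisions.
STAGES: dict[str, StageConfig] = {
    "triage": StageConfig(
        system_prompt="Gather details about the maintenance issue from the tenant.",
        tools=["create_work_order", "ask_clarifying_question"],
    ),
    "dispatch": StageConfig(
        system_prompt="Coordinate scheduling between the vendor and the tenant.",
        tools=["send_message", "propose_times"],
    ),
    "close_out": StageConfig(
        system_prompt="Confirm the work is done and record the final cost.",
        tools=["record_invoice", "close_work_order"],
    ),
}


def configure_agent(work_order_status: str) -> StageConfig:
    """Deterministic mapping from domain state to the agent's prompt and tools."""
    if work_order_status == "new":
        return STAGES["triage"]
    if work_order_status == "vendor_assigned":
        return STAGES["dispatch"]
    return STAGES["close_out"]
```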

Markdown Reports

I may come to see the error of my ways, but I love the idea of a semi-public system prompt that you build with the customer. In our use case there are large chunks of context that are roughly static and differ strictly by customer. Markdown is a great way to organize that information for both people and the LLM.
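One way this could be wired up, assuming a hypothetical per-customer folder of markdown files that you and the customer edit together: concatenate them into the system prompt when a conversation starts.

```python
from pathlib import Path


def build_system_prompt(customer_dir: Path) -> str:
    """Concatenate the customer's markdown files into one system prompt."""
    sections = []
    for md_file in sorted(customer_dir.glob("*.md")):
        title = md_file.stem.replace("_", " ").title()
        sections.append(f"## {title}\n\n{md_file.read_text()}")
    return (
        "You are an assistant for a property management company.\n\n"
        + "\n\n".join(sections)
    )


# e.g. build_system_prompt(Path("customers/acme_properties"))
```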

Examples and Anecdotes