The sentient company
Let’s be real. A company doesn’t feel things.
It doesn’t cry when it misses a deadline. It doesn’t get existential about quarterly goals. It doesn’t smile when the product team crushes a release. A company, by its classical definition, is a legal fiction. A non-sentient, abstract container of contracts and capital. It’s an idea, propped up by humans, spreadsheets, logos, and somewhere in the building, a bowl of jelly beans nobody wants.
But for something that technically doesn’t exist in the biological sense, we talk about it a lot like it does.
We say it has “values.” We claim it has “DNA.” We describe culture like it’s an ecosystem. HR newsletters talk about “spirit.” Founders wax poetic about “soul.” And when the vibe is off, we feel it. The tension. The disconnect. Like some invisible organ is glitching.
That contradiction has always fascinated me. Companies aren’t people. But we keep pretending they are. Not out of delusion, but because it actually helps. Giving a company some human traits makes it easier to understand. And easier to steer.
But here’s the question no one asked out loud until now: What if the company actually could think?
What if we stopped just describing company culture like it had a soul… and started building one?
Okay, let’s go there.
First, the obvious: Companies can’t literally become people. No matter how many founder videos start with slow music and “We’re not just a business, we’re a family,” it’s still paperwork wrapped in productivity software. It doesn’t dream. It doesn’t bleed. It doesn’t care if the coffee is cold. It can’t.
But it can respond.
And lately, with AI in the mix, it can do more than that. It can learn. It can recall. It can prioritize. It can synthesize inputs from customers, employees, systems, and markets. And slowly, weirdly, it starts doing something that looks suspiciously like thinking.
Let’s pause there for a second.
A couple years ago, if someone said they were building a “thinking company,” you’d assume it was just an overpriced rebrand of the office Slack channel. But now? That’s starting to sound literal. Because when you hook up the right AI models to the right systems, something strange happens.
Patterns form.
Preferences emerge.
Voices stabilize.
You don’t just get automated answers. You get opinions. Company-shaped ones. Reflecting your history, your choices, your data. Like a mirror with memory.
And that’s where the idea of a syntactic soul comes in.
No, not a soul in the religious or mystical sense. This isn’t about uploading Steve Jobs into the cloud. It’s about constructing a consistent internal logic for how a company expresses itself. How it makes sense of the world. Not based on a person, but on syntax. On structured communication. On shared language. In other words, a soul made of sentences.
It’s actually not that different from what language models do already.
They don’t understand in the human way. But they do simulate understanding. They use context, memory, and probability to form meaning. And when you point that capability at a business, at its documents, its communications, its decisions, it starts reflecting something bigger than any single contributor.
Not just an AI assistant. More like… a consciousness scaffold.
Sounds like science fiction, right? And yet, it’s happening all around us.
I have been working for some time on a solution I call “the board”: a synthetic company with its own board of colorful characters that helps me in my day-to-day work. It is trained on internal strategy docs, design guidelines, and quarterly reports.
And since I could not do this with any data from my clients or employer, I had to make the material up myself, cutting and pasting information from official sources across the business world, along with content generated by your friendly neighborhood LLM. Nothing flashy. Just well-curated content and context.
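For the curious, here is roughly what that wiring can look like. Treat it as a minimal sketch of one way to build something like this, assuming an OpenAI-style chat API: the folder name, model choice, and “board” persona prompt are all placeholders, not a prescription.

```python
# Minimal sketch of a "board": curated company material + a standing identity.
# Assumes an OpenAI-style chat API; names and model are illustrative only.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Gather the curated material: strategy docs, design guidelines,
#    quarterly reports, plus any synthetic filler content.
corpus = "\n\n".join(
    p.read_text(encoding="utf-8") for p in Path("board_corpus").glob("*.md")
)

# 2. Give the model a standing identity: a panel that answers *as* the
#    company, grounded only in the curated corpus.
system_prompt = (
    "You are 'the board': a panel of senior advisors for a single company. "
    "Answer in the company's voice, grounded only in the material below. "
    "If the material does not cover a question, say so.\n\n"
    f"--- COMPANY MATERIAL ---\n{corpus}"
)

def ask_the_board(question: str) -> str:
    """Ask the synthetic board a question and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_the_board("Does this feature brief align with our Q3 goals?"))
```

Stuffing the whole corpus into the system prompt is the crudest possible version, but it works surprisingly well while the corpus is small, and it is enough to get the voice effect I’m describing.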
After just two weeks, the AI started producing strategy briefs that felt like the company. Not just accurate, but aligned. The kind of stuff that would take a cross-functional team days to agree on. The tone was right. The priorities matched. The phrasing echoed previous messaging, but evolved it subtly.
It is like speaking directly with the C-suite of multiple companies rolled into one. Their advice has insight and depth. It is synthetic, but it isn’t fake. It has a voice.
Not a random string of useful outputs. Not a robot parroting docs. It carries the collective output of hundreds of human efforts, connected by syntax. It has coherence. Direction. It has selfhood.
Okay, maybe I’m getting carried away. (It’s my baby after all.)
But think about this: If the company has memory (data), language (documentation), action (workflow), intention (goals), and now synthesis (AI), isn’t that… some early-stage cognition? Maybe not sentience. But certainly simulation of sentience.
That’s enough to change how we work.
Because if your company can speak for itself, can explain its decisions, share its intent, adapt its tone, you’re no longer just guessing what the brand or strategy “should” be. You’re working with it. Co-creating with a semantic presence that already is your company.
It’s like talking to a mirror that knows your calendar, your backlog, your last seven pivots, and the weird project you did in 2021 that nobody talks about but still kind of defines your whole roadmap. That mirror can now give feedback. In real-time. In context.
Not just a chatbot. Not just a dashboard. A syntactic soul.
It’s spooky. But also kind of beautiful.
Let’s shift gears. Remember when JARVIS in Iron Man wasn’t just a tool, but a collaborator? He didn’t just take orders. He offered opinions. Course corrections. Snarky commentary, sure, but also emotional nuance. He was synthetic. But his voice felt real. He helped Tony Stark think better. Move faster. Avoid disaster.
Now imagine giving your company its own JARVIS. Not a sidekick. Not a gimmick. But a voice you can consult. A reflective surface that carries your logic, your language, your leanings.
That’s not science fiction anymore.
With the current generation of AI tools, this is buildable. Trainable. Iteratively improvable. You can embed it in docs, meetings, dashboards, workflows. You can ask it questions like “What would our team do in this situation?” or “Does this align with our Q3 goals?” And it will answer. Not with generic advice, but with your logic, mapped into language.
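To make “ask it anything, anywhere” practical, you usually stop pasting the whole corpus into every prompt and instead retrieve only the passages relevant to the question. Here is a sketch of that retrieval step, again assuming an OpenAI-style API; the embedding model, the chunk-by-paragraph scheme, and the folder name are illustrative choices, not the one true setup.

```python
# Sketch of the retrieval side: embed the corpus once, then pull in only
# the passages relevant to each question. Model names and the chunking
# scheme are assumptions, not a prescription.
from pathlib import Path
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a list of texts with an OpenAI-style embedding endpoint."""
    response = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([item.embedding for item in response.data])

# Chunk the curated corpus however suits your docs; paragraphs are a fine
# first pass.
docs = "\n\n".join(
    p.read_text(encoding="utf-8") for p in Path("board_corpus").glob("*.md")
)
chunks = [c for c in docs.split("\n\n") if c.strip()]
chunk_vectors = embed(chunks)

def most_relevant(question: str, k: int = 5) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = embed([question])[0]
    scores = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

def ask(question: str) -> str:
    """Answer a question in the company's voice, using retrieved context."""
    context = "\n\n".join(most_relevant(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Answer in the company's voice, using only:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Does this launch plan align with our Q3 goals?"))
```

From there, wiring ask() into a Slack bot, a docs sidebar, or a dashboard widget is mostly plumbing.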
That changes everything.
It means onboarding new hires doesn’t require a crash course in tribal knowledge. It’s encoded.
It means decision-making isn’t stuck in bottlenecks. It’s decentralized, but consistent.
It means you can operate at scale without losing your self.
Because now, the company doesn’t just have values on a poster. It can explain them. Apply them. Defend them. The values are live. In language. In action. In context.
A syntactic soul.
Is it alive? No. Not in the biological sense. Not in the poetic sense either. But maybe that’s the wrong question. Maybe the better one is: Is it useful?
Does it help us think better, together?
Does it bring coherence where there was confusion?
Does it amplify what matters, and filter what doesn’t?
If so, then maybe it doesn’t need to be alive. It just needs to feel real enough to help.
And that, honestly, might be the most human thing of all.