What is Semantic Equivocation and why does it matter for AI?


In 1948, Claude Shannon demonstrated mathematically that the gap between what a sender means and what a receiver understands is not just inevitable: it is measurable. He called this quantity equivocation. Because it is measurable, it is also governable.

Semantic Equivocation, abbreviated as SemanticEQ, applies that principle to AI. The abbreviation is a deliberate parallel to the graphic equaliser (EQ) in audio engineering: just as an EQ adjusts individual frequency bands to compensate for signal loss during transmission, SemanticEQ identifies and corrects the meaning bands (definitions, relationships, contextual declarations) lost between organisational intent and AI interpretation.

Semantic Equivocation is the gap between the meaning an organisation intends and the meaning an AI system infers from data, metadata, and context. The structure may be valid and the records complete, yet the model can still act on a different interpretation than the one intended. In plain language: lost in translation.

Unlike a missing field or a broken reference, which are errors an AI can detect, Semantic Equivocation is invisible. The AI has no way to know that a term was redefined, that a field is used differently in practice than its schema declares, or that organisational meaning has diverged from the structured definition. The result is an AI that is consistently, plausibly wrong.

What Shannon established matters here: because the gap is measurable, it can be closed. Semantic Equivocation is not an inevitable condition. It is a governance and data quality problem, and one that structured knowledge graphs, properly maintained, directly address.

Semantic Equivocation is distinct from Semantic Drift, which describes version-level definitional change in ontologies over time, and from Concept Drift, which describes statistical distribution shifts in training data. The definition and sketch below make the measurability claim concrete.
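Shannon's definition, in modern notation, makes the measurability claim precise. Writing X for the message the sender intends and Y for what the receiver observes, equivocation is the conditional entropy of X given Y, the uncertainty about the intended message that remains after the received one has been seen:

```latex
% Equivocation as defined by Shannon (1948), in modern notation:
% the conditional entropy of the intended message X given the received Y.
H(X \mid Y) = -\sum_{x,\,y} p(x, y) \, \log_2 p(x \mid y)
```

It is zero exactly when Y determines X, and every bit above zero is meaning lost in transit. Shannon further showed that a correction channel with capacity at least H(X|Y) suffices to recover what was lost, which is why a measurable gap is also a closable one.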
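To see how such a measure could be applied to organisational meaning, here is a minimal sketch in Python. It assumes an organisation can audit a sample of cases and tabulate the meaning it intended against the meaning its AI system actually inferred; the field name, the candidate meanings, and the joint distribution are hypothetical illustrations, not taken from any real system.

```python
from math import log2

# Hypothetical audit of a field called "churn": joint[x][y] is the estimated
# probability that the organisation intended meaning x while the AI inferred
# meaning y. Meanings (illustrative): 0 = cancelled contract,
# 1 = lapsed payment, 2 = downgraded plan.
joint = [
    [0.30, 0.05, 0.05],
    [0.05, 0.25, 0.05],
    [0.05, 0.05, 0.15],
]

def equivocation(joint):
    """Conditional entropy H(X|Y) in bits: -sum_{x,y} p(x,y) * log2(p(x|y))."""
    # Marginal probability of each inferred meaning y.
    p_y = [sum(row[y] for row in joint) for y in range(len(joint[0]))]
    h = 0.0
    for row in joint:
        for y, p_xy in enumerate(row):
            if p_xy > 0:
                h -= p_xy * log2(p_xy / p_y[y])  # p(x|y) = p(x,y) / p(y)
    return h

print(f"Equivocation H(X|Y) = {equivocation(joint):.3f} bits")
```

A result of 0 bits would mean the AI's interpretation always pins down the intended meaning; anything above zero quantifies the gap that governance artefacts such as shared definitions and contextual declarations in a knowledge graph are meant to close.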
