Competence and Coherence: The Structural Foundations of Enterprise AI Readiness
Abstract
Large language models (LLMs) possess the competencies required for enterprise knowledge work: writing, analysis, reasoning, and the ability to operate tools programmatically. Knowledge work, however, requires more than competence. It requires context: knowledge of who, what, when, why, and in what state. Competence is a property of the model; context is a property of the enterprise's data architecture. This paper presents a formal decomposition of that architecture. We reduce enterprise operational data to a small set of ontological primitives: typed entities participating in defined state machines. We show how the enterprise software stack fragments this ontology across proprietary systems and demonstrate that human workers have historically served as an implicit integration layer bridging the resulting contextual gaps. Building on enterprise architecture research establishing that architectural choices constrain execution capability, we introduce a two-layer AI readiness framework: state readiness (is the current ontological state complete, coherent, legible, and operable?) and transition readiness (can the enterprise trace decisions through state mutations over time?). We identify coherence, the standardization of the enterprise's ontological representations, as the load-bearing dimension, and define AI readiness as a function of that coherence.
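The abstract's core primitives can be made concrete with a minimal sketch. The example below is hypothetical (the `Invoice` entity, its state names, and the field names are illustrative assumptions, not the paper's formalism): a typed entity participates in a defined state machine, legal transitions model state readiness (only operable mutations are allowed), and an audit log of who/why/when per mutation models transition readiness.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical ontology fragment: the defined state machine for one entity type.
INVOICE_TRANSITIONS = {
    "draft": {"submitted"},
    "submitted": {"approved", "rejected"},
    "approved": {"paid"},
    "rejected": set(),
    "paid": set(),
}

@dataclass
class Invoice:
    """A typed entity participating in a defined state machine."""
    invoice_id: str
    state: str = "draft"
    # Transition readiness: a traceable log of state mutations over time.
    history: list = field(default_factory=list)

    def transition(self, new_state: str, actor: str, reason: str) -> None:
        # State readiness: only transitions the ontology defines are operable.
        if new_state not in INVOICE_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.history.append({
            "from": self.state,
            "to": new_state,
            "who": actor,                         # who
            "why": reason,                        # why
            "when": datetime.now(timezone.utc),   # when
        })
        self.state = new_state

inv = Invoice("INV-001")
inv.transition("submitted", actor="alice", reason="monthly billing")
inv.transition("approved", actor="bob", reason="within budget")
print(inv.state)  # approved
```

In this sketch, fragmentation would correspond to each proprietary system holding its own incompatible copy of `INVOICE_TRANSITIONS`; coherence is the standardization of that shared representation.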