Dan Wolfson

Context engineering is one of the latest buzzwords across the AI community. The basic idea is to both improve the quality and manage the cost of a conversation with an LLM by augmenting the user’s input (the prompt).
Prompt Engineering came first. It alters the phrasing and structure of the user’s question to an LLM to improve the quality of the response. Context Engineering extends this idea by engineering additional information (context) into the user’s request while considering the cost.
For example, the size of the prompt directly relates to both the financial and resource cost of execution. If a prompt is augmented with more context than is needed to achieve the desired quality of response, then money, time, and resources are being wasted. If too little context is provided, the quality suffers.
Context Engineering focuses on establishing the user’s context whilst achieving a reasonable balance between the desired quality and the cost to achieve it. Techniques include more selective context generation, summarization, and compression.
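To make the quality-versus-cost trade-off concrete, here is a minimal, hypothetical sketch of selective context generation: candidate snippets are ranked by relevance and packed into a fixed token budget. The function names, the relevance scores, and the four-characters-per-token heuristic are all illustrative assumptions, not part of any particular system.

```python
# Illustrative sketch (not a real library): greedily pack the most relevant
# context snippets into a token budget, trading quality against prompt cost.

def estimate_tokens(text: str) -> int:
    # Crude assumption: roughly one token per 4 characters of English text.
    return max(1, len(text) // 4)

def select_context(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily keep the highest-relevance snippets that fit the budget.

    `snippets` is a list of (relevance_score, text) pairs.
    """
    chosen: list[str] = []
    used = 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen
```

A real system would use a tokenizer and learned relevance scores, but the shape of the decision is the same: every snippet added to the prompt must earn its cost.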
One of the popular ways of augmenting context is Retrieval Augmented Generation (RAG), which allows relevant information to be prepared for dynamic lookup when the prompt is being engineered. Agents are also used in context engineering to coordinate the gathering of relevant information, structure it in a useful way, and then compress the results, often using the Model Context Protocol (MCP) to dynamically fetch information from other systems.
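The retrieval step of RAG can be sketched as follows. This is a deliberately simplified illustration: word overlap stands in for the embedding-similarity search a real RAG system would use, and all names here are assumptions for the example.

```python
# Illustrative RAG retrieval sketch: score stored documents against the
# question (word overlap standing in for embedding similarity) and prepend
# the best matches to the prompt as context.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(question: str, documents: list[str]) -> str:
    # The retrieved documents become the engineered context for the LLM call.
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

In production the document store, the similarity function, and the prompt template would each be far richer, but the flow is the same: look up relevant information at request time and fold it into the prompt.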
The context in Context Engineering includes information about the user’s previous interactions (memory), the MCP tools (functions) the AI can call, and the situational context of the user: is this informal browsing, a doctor researching a diagnosis, or a poor software engineer trying to figure out what this term Context Engineering is about?
Context Engineering also uses automated evaluation systems to measure the impact of different approaches and techniques on both the cost and quality of the results. With suitable observability in place, it becomes easier to experiment and innovate in a meaningful way.
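An evaluation harness of this kind can be sketched very simply: for each context-engineering variant, record the prompt’s estimated cost and a quality score from an evaluator. Everything here is an illustrative assumption; in practice the evaluator would be an automated judge or a benchmark, not a toy function.

```python
# Illustrative evaluation sketch: compare context-engineering variants on
# both axes that matter, prompt cost (estimated tokens) and quality.

def estimate_tokens(text: str) -> int:
    # Crude assumption: roughly one token per 4 characters of English text.
    return max(1, len(text) // 4)

def evaluate_variants(variants: dict[str, str], score_fn) -> dict[str, dict]:
    """Measure each engineered prompt's cost and quality.

    `variants` maps a variant name to the prompt it produced;
    `score_fn` is a stand-in for an automated quality evaluator.
    """
    report = {}
    for name, prompt in variants.items():
        report[name] = {
            "tokens": estimate_tokens(prompt),
            "quality": score_fn(prompt),
        }
    return report
```

Logging these measurements per request is the observability that makes controlled experiments between variants possible.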
Approaches and techniques for implementing context engineering continue to develop at a rapid pace, with particular focus on RAG and on the use of agents to dynamically construct efficient and effective prompts. Context Engineering has already enabled measurable improvements within the AI ecosystem.
In fact, innovation throughout the entire AI ecosystem is rapid, perhaps even frenetic: from creating smaller, domain-specific LLMs, to standardizing communications between agents, to dynamic forms of RAG that enable more relevant construction of prompts. All of these improvements are important. But there has been relatively little discussion of systematic, integrated enterprise context management – how to train LLMs and RAG systems with the right data, how to obtain the user, business, and information systems context for Context Engineering to use, and how to feed results back for auditing and continuous improvement of the broader business processes it serves.
Context Intelligence augments Context Engineering by providing these missing pieces. A context intelligence system can survey existing data assets, ingest existing enterprise context, and derive linkages between them. It actively manages this information and serves as a knowledge base to select training data for an LLM, design and implement your RAG systems, and enable your agents to behave with more contextual intelligence. These capabilities span the entire AI application journey, from initial project conception through ongoing operations, ensuring that context flows from one phase to the next rather than appearing only at the time of an AI request.
- AI Preparation:
  - selecting the right data for LLM training and RAG systems, including an understanding of licensing, privacy, and quality
  - providing additional contextual information for LLM and/or RAG systems
  - characterizing MCP tools to help select contextually appropriate tools
- AI Processing:
  - expanding the user context using linkages between roles, organizations, and user privileges
  - informing MCP tool selection
  - informing RAG selection and query formation
- AI Protection:
  - ingesting observability data from the AI system and integrating it into the broader understanding of business processes, including lineage and auditing of the results
  - understanding which data sources are critical for AI processing to ensure ongoing maintenance and investment
  - determining areas of focus for ongoing development: additional sources to manage, where to focus data quality improvements, creating and managing contextual guardrails for queries, where to invest in additional stewardship, etc.
Many organizations excel at one or two of these areas; Context Intelligence ensures they work together – decisions made in AI Preparation inform AI Processing and AI Protection, and shared context and feedback help to improve all activities.
The open source Egeria Project supports Context Intelligence. A project of the Linux Foundation AI and Data Foundation, the Egeria Project provides the capabilities to survey, catalog, link and manage information artifacts and their context.
Egeria incorporates an extensive and extensible type system, with coverage spanning Data Products, Data Tables, APIs, Organizations, Business Capabilities, Business Processes, Glossary Terms, and Reference Data – and all the linkages between them. It implements a loosely coupled, distributed knowledge graph with capabilities to exchange, integrate, and extend information across systems.
Egeria has been designed to facilitate the interchange of data between existing tools and systems – and also allows this information to be cross-linked, augmented, and extended – without having to wrest control of the information from its native systems. These capabilities are ideally suited to supplying context intelligence within AI ecosystems.

In subsequent blogs we will begin to highlight how we believe Context Intelligence (and Egeria) supports the use cases outlined above throughout the AI application journey. Building successful AI applications isn’t just about the AI tools – it’s about all the decisions and collaborations from initial ideas to business impact. From defining what problem to solve, through ensuring data readiness, to validating results with users, Context Intelligence provides the foundation for sustainable AI success. Our ongoing work in Pragmatic Data Research and the Egeria Community demonstrates and refines how Context Intelligence supports this journey.