4 Context Engineering Principles for building AI Agents
Context Engineering is taking over the AI Engineering space. Learn how to offload context, apply one-shot prompting, and isolate and reduce context in this guide.
Many engineers building AI Agents have realized that a major blocker is having control over the information that is sent to the LLM in every call.
It’s CRUCIAL to send as little information as possible to the LLM in order to:
Get accurate responses
Reduce costs
Reduce the time the LLM needs to answer.
BUT, it’s still important to pass all the relevant information to the LLM, so the response is correct.
Techniques
There are various techniques to achieve this; some of them are:
Context offloading: Instead of keeping all the information in the state, write it to a .txt or .md file so you can come back to it later or track the progress of completed tasks (see the first sketch after this list).
Prompt Oneshotting: This is often used in applications whose first step involves research. While it can be tempting to convert the information to our final desired output after every iteration, it performs much better to keep all the research as plain text and convert it to the final output in a single step at the end, once all the previous research has been completed (see the second sketch after this list).
Isolating Context: This technique is quite analogous to the “Single Responsibility” principle in software, which consists of keeping functions and classes focused on a single purpose. In AI Engineering this could be defined as having multiple subagents inside our main agent, where each subagent receives just the context it needs (see the third sketch after this list).
Reduce Context: This may involve various techniques like summarization or deleting messages and information from the state. This way we keep only the essential information the LLM needs for successive calls (see the last sketch after this list). ATTENTION: This can be dangerous, as important context may also be lost.
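Here is a minimal sketch of context offloading. The scratchpad file name and helper functions are hypothetical; the point is that intermediate results live on disk instead of in the prompt, and are only read back when a later step actually needs them.

```python
# Minimal context-offloading sketch: intermediate results go to a file,
# not into every LLM call. File name and helpers are illustrative only.
from pathlib import Path

SCRATCHPAD = Path("scratchpad.md")  # hypothetical scratchpad file

def offload(note: str) -> None:
    """Append an intermediate result or progress note to the scratchpad."""
    with SCRATCHPAD.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def recall() -> str:
    """Load the scratchpad only when a step actually needs the full history."""
    return SCRATCHPAD.read_text(encoding="utf-8") if SCRATCHPAD.exists() else ""

# Track progress of completed tasks without bloating the context window.
offload("Step 1: fetched pricing page")
offload("Step 2: extracted plan names")
print(recall())
```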
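Next, a sketch of prompt oneshotting, assuming a hypothetical call_llm helper that stands in for whatever client you use. The research loop keeps plain text only; the single final call converts the accumulated notes into the desired output format.

```python
# Prompt-oneshotting sketch: intermediate research stays as raw text,
# and only the LAST call produces the final, formatted output.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM client here")  # placeholder

def research_then_convert(questions: list[str]) -> str:
    notes: list[str] = []
    for q in questions:
        # Keep each research result as plain text; no schema or formatting yet.
        notes.append(call_llm(f"Research this and answer in plain text: {q}"))
    # Single final call: convert all accumulated research into the final output.
    return call_llm(
        "Using ONLY the notes below, produce the final report as markdown:\n\n"
        + "\n\n".join(notes)
    )
```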
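A sketch of isolating context with subagents follows. The State fields, the researcher/writer split, and call_llm are all assumptions for illustration; what matters is that each subagent only sees the slice of state relevant to its single responsibility.

```python
# Context-isolation sketch: each subagent gets only the context it needs.
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM client here")  # placeholder

@dataclass
class State:
    user_request: str
    research_notes: str = ""
    draft: str = ""

def researcher(state: State) -> State:
    # Only sees the user request, nothing else.
    state.research_notes = call_llm(f"Research: {state.user_request}")
    return state

def writer(state: State) -> State:
    # Only sees the research notes, not the raw request history.
    state.draft = call_llm(f"Write a summary of these notes:\n{state.research_notes}")
    return state

def main_agent(request: str) -> str:
    state = State(user_request=request)
    for subagent in (researcher, writer):
        state = subagent(state)
    return state.draft
```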
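Finally, a sketch of reducing context by summarizing older messages, again assuming a hypothetical call_llm helper and an arbitrary keep_last threshold. As warned above, summarization can silently drop important details, so consider offloading the raw history to a file before trimming it.

```python
# Context-reduction sketch: summarize old messages, keep only recent ones.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM client here")  # placeholder

def reduce_context(messages: list[str], keep_last: int = 4) -> list[str]:
    """Collapse older messages into a summary and keep the last few verbatim.

    WARNING: important context may be lost in the summary, so keep the raw
    history elsewhere (e.g. offloaded to a file) if you might need it later.
    """
    if len(messages) <= keep_last:
        return messages
    summary = call_llm(
        "Summarize the key facts and decisions in these messages:\n"
        + "\n".join(messages[:-keep_last])
    )
    return [f"Summary of earlier conversation: {summary}"] + messages[-keep_last:]
```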
Let's connect !!
Get in touch if you want updates, examples, and insights on how AI agents, Langchain and more are evolving and where they’re going next.