Context matters for both developers and artificial intelligence, and both can leverage a developer platform to improve contextual awareness.
Why Context Windows Matter for LLMs and Developers
Context is critical for understanding and productivity in every role, but especially for large language models (LLMs) and developers. For LLMs, the context window defines the amount of information the model can process at one time. Similarly, developers deal with their own version of a "context window" as they juggle multiple tools, tabs, and technologies. By understanding how context windows work for LLMs and how developers can benefit from better context management, we can explore how developer platforms, agentic systems, and platform engineering streamline workflows and enhance productivity.
(Yes, we did just put the peanut butter, jelly and bananas into a sandwich.)
Large language models operate within defined context windows, which set a limit on how much text or information they can "remember" during a given session. If the context window is too small, the model will lose important details, forcing users to repeat or reframe queries. With larger context windows, LLMs can:
However, even with large context windows, there are trade-offs. Increasing the context size can lead to slower response times and higher computational costs. Balancing these constraints is crucial for practical applications.
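One way to make the trade-off concrete is to look at how finite context windows are typically managed in practice: when a conversation grows past the budget, the oldest messages get dropped first. The sketch below is illustrative only; `trim_to_context_window` is a hypothetical helper, and it approximates token counts by splitting on whitespace rather than using a real tokenizer.

```python
# A minimal sketch of context-window management: keep the most recent
# messages that fit within a fixed token budget. Token counts are
# approximated here by whitespace-split word count for illustration.

def trim_to_context_window(messages, max_tokens):
    """Return the most recent messages whose combined (approximate)
    token count fits within max_tokens. Older messages are dropped
    first, which is why details from early in a session get 'forgotten'."""
    kept = []
    used = 0
    for message in reversed(messages):   # walk newest-first
        tokens = len(message.split())    # crude token approximation
        if used + tokens > max_tokens:
            break
        kept.append(message)
        used += tokens
    return list(reversed(kept))          # restore chronological order

history = [
    "User: our service is named payments-api",
    "Assistant: noted, payments-api",
    "User: it times out under load",
    "Assistant: check connection pooling",
]

# With a 12-token budget, the earliest messages fall out of the window,
# including the one that named the service.
print(trim_to_context_window(history, max_tokens=12))
```

This is also where the cost trade-off shows up: raising `max_tokens` retains more detail but means more tokens to process on every request.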
Developers frequently engage in "context switching" as they move between tools, browser tabs, and projects. Whether debugging an application, writing new features, or reviewing pull requests, a developer's mental context window often becomes fragmented. This has significant downsides:
For example, imagine debugging a web application. A developer might switch between an IDE, observability tools, database clients, browser-based tools, internal or external documentation, GitHub, and maybe even a Slack thread. Each switch imposes a cognitive penalty, making it harder to build and maintain a mental model of a given problem.
Agentic systems, effectively networks of related tools working collaboratively, actively consolidate and manage context across various workflows. Unlike so-called "one-shot" LLM interactions, they can handle ongoing and abstract tasks. By integrating these systems, developers benefit from enhanced productivity, reduced cognitive load, and better decision-making capabilities. Here’s a deeper look at how they function and the advantages they offer:
Agentic systems operate by employing interconnected LLMs or specialized agents, each optimized for a specific task, data set or problem domain. These agents work collaboratively to:
Networks of agents provide even greater functionality by enabling collaboration between individual systems, and each agent requires a smaller context window than a single "do it all" LLM would need. For example:
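The division of labor described above can be sketched as a simple router that dispatches subtasks to specialized agents. Everything here is illustrative: the agents are plain functions standing in for LLM-backed services, and the keyword routing is a deliberately naive stand-in for real task classification.

```python
# A sketch of a network of specialized agents. Each "agent" is a
# function responsible for one domain, so no single context window
# has to hold logs, docs, and code at once. All names are illustrative.

def logs_agent(task):
    return f"logs-agent: scanned recent errors for '{task}'"

def docs_agent(task):
    return f"docs-agent: found runbook section for '{task}'"

def code_agent(task):
    return f"code-agent: located the handler implementing '{task}'"

# Map a routing keyword to the agent that owns that domain.
AGENTS = {
    "error": logs_agent,
    "runbook": docs_agent,
    "handler": code_agent,
}

def route(task):
    """Dispatch the task to the first agent whose keyword it mentions."""
    for keyword, agent in AGENTS.items():
        if keyword in task:
            return agent(task)
    return f"no specialized agent matched '{task}'"

print(route("find the error spike at 09:00"))
print(route("which runbook covers rollbacks?"))
```

The design point is that each agent only carries context for its own domain; the router's job is cheap coordination, not comprehension of everything at once.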
We are a bit biased here, but let's consider a Backstage-based internal developer platform equipped with agentic capabilities:
By leveraging a system of this type, a developer can achieve greater focus, faster results, and improved collaboration across teams.
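To make the platform idea concrete, the sketch below shows how an assistant on an internal developer platform might assemble context about a service from several sources in one place, instead of the developer opening each tool. The data stores and the `assemble_context` helper are hypothetical stand-ins, not a real Backstage API.

```python
# Hypothetical stand-ins for platform data sources: a service catalog,
# CI status, and docs. In a real platform these would be API calls.
CATALOG = {"payments-api": {"owner": "team-payments", "lifecycle": "production"}}
CI_STATUS = {"payments-api": "last build: passing"}
DOCS = {"payments-api": "runbook: docs/payments-api/runbook.md"}

def assemble_context(service):
    """Gather what the platform knows about a service into one answer,
    sparing the developer a round of tab-hopping."""
    parts = []
    if service in CATALOG:
        info = CATALOG[service]
        parts.append(f"owned by {info['owner']} ({info['lifecycle']})")
    if service in CI_STATUS:
        parts.append(CI_STATUS[service])
    if service in DOCS:
        parts.append(DOCS[service])
    return f"{service}: " + "; ".join(parts)

print(assemble_context("payments-api"))
```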
It is worth highlighting that a typical developer has a smaller context window than an LLM, but human minds work differently: humans pay a lower context-switching cost and "recall cost" when retrieving non-fresh information. With current LLMs, you essentially need a new session or a multi-agent network to approach the same level of practical human performance.
What this really means is that, in certain situations, a human armed with the right tools will produce better output faster. An LLM thrives when a large initial dataset is fed into its context window. This is why we see LLMs thriving at junior and perhaps even mid-level engineering work, but being less effective with advanced concepts.
Augmenting software teams with an agentic platform can further streamline everything that goes into bringing code to production. Agents, when integrated well, help reduce context switching, allowing developers to work more efficiently and with greater clarity.
For both LLMs and developers, context windows define the boundaries of what’s possible.